00:00:00.000 Started by upstream project "autotest-per-patch" build number 132773 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.027 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.027 The recommended git tool is: git 00:00:00.027 using credential 00000000-0000-0000-0000-000000000002 00:00:00.029 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.049 Fetching changes from the remote Git repository 00:00:00.055 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.078 Using shallow fetch with depth 1 00:00:00.078 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.078 > git --version # timeout=10 00:00:00.096 > git --version # 'git version 2.39.2' 00:00:00.096 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.118 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.118 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.363 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.383 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.395 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.396 > git config core.sparsecheckout # timeout=10 00:00:06.407 > git read-tree -mu HEAD # timeout=10 00:00:06.422 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.442 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.442 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.511 [Pipeline] Start of Pipeline 00:00:06.524 [Pipeline] library 00:00:06.526 Loading library shm_lib@master 00:00:06.526 Library shm_lib@master is cached. Copying from home. 00:00:06.544 [Pipeline] node 00:29:23.467 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:29:23.469 [Pipeline] { 00:29:23.482 [Pipeline] catchError 00:29:23.483 [Pipeline] { 00:29:23.492 [Pipeline] wrap 00:29:23.503 [Pipeline] { 00:29:23.509 [Pipeline] stage 00:29:23.511 [Pipeline] { (Prologue) 00:29:23.705 [Pipeline] sh 00:29:23.988 + logger -p user.info -t JENKINS-CI 00:29:24.009 [Pipeline] echo 00:29:24.011 Node: GP11 00:29:24.019 [Pipeline] sh 00:29:24.330 [Pipeline] setCustomBuildProperty 00:29:24.345 [Pipeline] echo 00:29:24.347 Cleanup processes 00:29:24.355 [Pipeline] sh 00:29:24.644 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:24.644 449587 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:24.660 [Pipeline] sh 00:29:24.947 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:24.947 ++ awk '{print $1}' 00:29:24.947 ++ grep -v 'sudo pgrep' 00:29:24.947 + sudo kill -9 00:29:24.947 + true 00:29:24.965 [Pipeline] cleanWs 00:29:24.977 [WS-CLEANUP] Deleting project workspace... 00:29:24.977 [WS-CLEANUP] Deferred wipeout is used... 
00:29:24.984 [WS-CLEANUP] done 00:29:24.988 [Pipeline] setCustomBuildProperty 00:29:25.006 [Pipeline] sh 00:29:25.293 + sudo git config --global --replace-all safe.directory '*' 00:29:25.403 [Pipeline] httpRequest 00:29:25.754 [Pipeline] echo 00:29:25.755 Sorcerer 10.211.164.101 is alive 00:29:25.763 [Pipeline] retry 00:29:25.764 [Pipeline] { 00:29:25.776 [Pipeline] httpRequest 00:29:25.781 HttpMethod: GET 00:29:25.781 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:29:25.782 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:29:25.786 Response Code: HTTP/1.1 200 OK 00:29:25.786 Success: Status code 200 is in the accepted range: 200,404 00:29:25.786 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:29:25.948 [Pipeline] } 00:29:25.971 [Pipeline] // retry 00:29:25.990 [Pipeline] sh 00:29:26.269 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:29:26.284 [Pipeline] httpRequest 00:29:26.651 [Pipeline] echo 00:29:26.652 Sorcerer 10.211.164.101 is alive 00:29:26.660 [Pipeline] retry 00:29:26.662 [Pipeline] { 00:29:26.674 [Pipeline] httpRequest 00:29:26.678 HttpMethod: GET 00:29:26.679 URL: http://10.211.164.101/packages/spdk_66902d69af506c19fa2a7701832daf75f8183e0d.tar.gz 00:29:26.680 Sending request to url: http://10.211.164.101/packages/spdk_66902d69af506c19fa2a7701832daf75f8183e0d.tar.gz 00:29:26.682 Response Code: HTTP/1.1 200 OK 00:29:26.682 Success: Status code 200 is in the accepted range: 200,404 00:29:26.682 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_66902d69af506c19fa2a7701832daf75f8183e0d.tar.gz 00:29:29.205 [Pipeline] } 00:29:29.226 [Pipeline] // retry 00:29:29.234 [Pipeline] sh 00:29:29.589 + tar --no-same-owner -xf spdk_66902d69af506c19fa2a7701832daf75f8183e0d.tar.gz 00:29:32.897 [Pipeline] sh 00:29:33.187 + git -C spdk log --oneline -n5 00:29:33.187 66902d69a env: explicitly set --legacy-mem flag in no hugepages mode 00:29:33.187 421ce3854 env: add mem_map_fini and vtophys_fini to cleanup mem maps 00:29:33.187 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:29:33.187 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:29:33.187 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev 00:29:33.199 [Pipeline] } 00:29:33.212 [Pipeline] // stage 00:29:33.221 [Pipeline] stage 00:29:33.224 [Pipeline] { (Prepare) 00:29:33.239 [Pipeline] writeFile 00:29:33.254 [Pipeline] sh 00:29:33.542 + logger -p user.info -t JENKINS-CI 00:29:33.557 [Pipeline] sh 00:29:33.845 + logger -p user.info -t JENKINS-CI 00:29:33.858 [Pipeline] sh 00:29:34.148 + cat autorun-spdk.conf 00:29:34.148 SPDK_RUN_FUNCTIONAL_TEST=1 00:29:34.148 SPDK_TEST_NVMF=1 00:29:34.148 SPDK_TEST_NVME_CLI=1 00:29:34.148 SPDK_TEST_NVMF_TRANSPORT=tcp 00:29:34.148 SPDK_TEST_NVMF_NICS=e810 00:29:34.148 SPDK_TEST_VFIOUSER=1 00:29:34.148 SPDK_RUN_UBSAN=1 00:29:34.148 NET_TYPE=phy 00:29:34.157 RUN_NIGHTLY=0 00:29:34.162 [Pipeline] readFile 00:29:34.187 [Pipeline] withEnv 00:29:34.189 [Pipeline] { 00:29:34.202 [Pipeline] sh 00:29:34.493 + set -ex 00:29:34.493 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:29:34.493 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:29:34.493 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:29:34.493 ++ SPDK_TEST_NVMF=1 00:29:34.493 ++ 
SPDK_TEST_NVME_CLI=1 00:29:34.493 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:29:34.493 ++ SPDK_TEST_NVMF_NICS=e810 00:29:34.493 ++ SPDK_TEST_VFIOUSER=1 00:29:34.493 ++ SPDK_RUN_UBSAN=1 00:29:34.493 ++ NET_TYPE=phy 00:29:34.493 ++ RUN_NIGHTLY=0 00:29:34.493 + case $SPDK_TEST_NVMF_NICS in 00:29:34.493 + DRIVERS=ice 00:29:34.493 + [[ tcp == \r\d\m\a ]] 00:29:34.493 + [[ -n ice ]] 00:29:34.493 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:29:34.493 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:29:34.493 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:29:34.493 rmmod: ERROR: Module irdma is not currently loaded 00:29:34.493 rmmod: ERROR: Module i40iw is not currently loaded 00:29:34.493 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:29:34.493 + true 00:29:34.493 + for D in $DRIVERS 00:29:34.493 + sudo modprobe ice 00:29:34.493 + exit 0 00:29:34.503 [Pipeline] } 00:29:34.518 [Pipeline] // withEnv 00:29:34.523 [Pipeline] } 00:29:34.537 [Pipeline] // stage 00:29:34.546 [Pipeline] catchError 00:29:34.548 [Pipeline] { 00:29:34.560 [Pipeline] timeout 00:29:34.560 Timeout set to expire in 1 hr 0 min 00:29:34.561 [Pipeline] { 00:29:34.573 [Pipeline] stage 00:29:34.575 [Pipeline] { (Tests) 00:29:34.588 [Pipeline] sh 00:29:34.877 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:29:34.877 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:29:34.877 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:29:34.877 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:29:34.877 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:34.877 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:29:34.877 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:29:34.877 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:29:34.877 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:29:34.877 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:29:34.877 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:29:34.877 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:29:34.877 + source /etc/os-release 00:29:34.877 ++ NAME='Fedora Linux' 00:29:34.877 ++ VERSION='39 (Cloud Edition)' 00:29:34.877 ++ ID=fedora 00:29:34.877 ++ VERSION_ID=39 00:29:34.877 ++ VERSION_CODENAME= 00:29:34.877 ++ PLATFORM_ID=platform:f39 00:29:34.877 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:29:34.877 ++ ANSI_COLOR='0;38;2;60;110;180' 00:29:34.877 ++ LOGO=fedora-logo-icon 00:29:34.877 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:29:34.877 ++ HOME_URL=https://fedoraproject.org/ 00:29:34.877 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:29:34.877 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:29:34.877 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:29:34.877 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:29:34.877 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:29:34.877 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:29:34.877 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:29:34.877 ++ SUPPORT_END=2024-11-12 00:29:34.877 ++ VARIANT='Cloud Edition' 00:29:34.877 ++ VARIANT_ID=cloud 00:29:34.877 + uname -a 00:29:34.877 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:29:34.877 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:29:36.262 Hugepages 00:29:36.262 node hugesize free / total 00:29:36.262 node0 1048576kB 0 / 0 00:29:36.262 node0 2048kB 0 / 0 00:29:36.262 node1 1048576kB 0 / 0 00:29:36.262 node1 2048kB 0 / 0 00:29:36.262 00:29:36.262 Type BDF Vendor Device NUMA Driver Device Block devices 00:29:36.262 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:29:36.262 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:29:36.262 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:29:36.262 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:29:36.262 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:29:36.262 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:29:36.262 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:29:36.262 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:29:36.262 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:29:36.262 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:29:36.262 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:29:36.262 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:29:36.262 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:29:36.263 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:29:36.263 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:29:36.263 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:29:36.263 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:29:36.263 + rm -f /tmp/spdk-ld-path 00:29:36.263 + source autorun-spdk.conf 00:29:36.263 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:29:36.263 ++ SPDK_TEST_NVMF=1 00:29:36.263 ++ SPDK_TEST_NVME_CLI=1 00:29:36.263 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:29:36.263 ++ SPDK_TEST_NVMF_NICS=e810 00:29:36.263 ++ SPDK_TEST_VFIOUSER=1 00:29:36.263 ++ SPDK_RUN_UBSAN=1 00:29:36.263 ++ NET_TYPE=phy 00:29:36.263 ++ RUN_NIGHTLY=0 00:29:36.263 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:29:36.263 + [[ -n '' ]] 00:29:36.263 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:36.263 + for M in /var/spdk/build-*-manifest.txt 00:29:36.263 + [[ -f 
/var/spdk/build-kernel-manifest.txt ]] 00:29:36.263 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:29:36.263 + for M in /var/spdk/build-*-manifest.txt 00:29:36.263 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:29:36.263 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:29:36.263 + for M in /var/spdk/build-*-manifest.txt 00:29:36.263 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:29:36.263 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:29:36.263 ++ uname 00:29:36.263 + [[ Linux == \L\i\n\u\x ]] 00:29:36.263 + sudo dmesg -T 00:29:36.263 + sudo dmesg --clear 00:29:36.263 + dmesg_pid=450268 00:29:36.263 + [[ Fedora Linux == FreeBSD ]] 00:29:36.263 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:29:36.263 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:29:36.263 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:29:36.263 + [[ -x /usr/src/fio-static/fio ]] 00:29:36.263 + sudo dmesg -Tw 00:29:36.263 + export FIO_BIN=/usr/src/fio-static/fio 00:29:36.263 + FIO_BIN=/usr/src/fio-static/fio 00:29:36.263 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:29:36.263 + [[ ! -v VFIO_QEMU_BIN ]] 00:29:36.263 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:29:36.263 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:29:36.263 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:29:36.263 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:29:36.263 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:29:36.263 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:29:36.263 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:29:36.263 05:24:30 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:29:36.263 05:24:30 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:29:36.263 05:24:30 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:29:36.263 05:24:30 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:29:36.263 05:24:30 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:29:36.263 05:24:30 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:29:36.263 05:24:30 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:29:36.263 05:24:30 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:29:36.263 05:24:30 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:29:36.263 05:24:30 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:29:36.263 05:24:30 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:29:36.263 05:24:30 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:29:36.263 05:24:30 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:29:36.263 05:24:30 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:29:36.263 05:24:30 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:36.263 05:24:30 -- scripts/common.sh@15 -- $ shopt -s extglob 00:29:36.263 05:24:30 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:36.263 05:24:30 -- scripts/common.sh@552 -- $ 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:36.263 05:24:30 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:36.263 05:24:30 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.263 05:24:30 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.263 05:24:30 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.263 05:24:30 -- paths/export.sh@5 -- $ export PATH 00:29:36.263 05:24:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.263 05:24:30 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:29:36.263 05:24:30 -- common/autobuild_common.sh@493 -- $ date +%s 00:29:36.263 05:24:30 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733718270.XXXXXX 00:29:36.263 05:24:30 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733718270.up4Czc 00:29:36.263 05:24:30 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:29:36.263 05:24:30 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:29:36.263 05:24:30 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:29:36.263 05:24:30 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:29:36.263 05:24:30 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:29:36.263 05:24:30 -- common/autobuild_common.sh@509 -- $ get_config_params 00:29:36.263 05:24:30 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:29:36.263 05:24:30 -- common/autotest_common.sh@10 -- $ set +x 
00:29:36.263 05:24:30 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:29:36.263 05:24:30 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:29:36.263 05:24:30 -- pm/common@17 -- $ local monitor 00:29:36.263 05:24:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:36.263 05:24:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:36.263 05:24:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:36.263 05:24:30 -- pm/common@21 -- $ date +%s 00:29:36.263 05:24:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:36.263 05:24:30 -- pm/common@21 -- $ date +%s 00:29:36.263 05:24:30 -- pm/common@25 -- $ sleep 1 00:29:36.263 05:24:30 -- pm/common@21 -- $ date +%s 00:29:36.263 05:24:30 -- pm/common@21 -- $ date +%s 00:29:36.263 05:24:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733718270 00:29:36.263 05:24:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733718270 00:29:36.263 05:24:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733718270 00:29:36.263 05:24:30 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733718270 00:29:36.263 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733718270_collect-cpu-load.pm.log 00:29:36.263 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733718270_collect-vmstat.pm.log 00:29:36.264 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733718270_collect-cpu-temp.pm.log 00:29:36.264 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733718270_collect-bmc-pm.bmc.pm.log 00:29:37.215 05:24:31 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:29:37.215 05:24:31 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:29:37.215 05:24:31 -- spdk/autobuild.sh@12 -- $ umask 022 00:29:37.215 05:24:31 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:37.215 05:24:31 -- spdk/autobuild.sh@16 -- $ date -u 00:29:37.215 Mon Dec 9 04:24:31 AM UTC 2024 00:29:37.215 05:24:31 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:29:37.215 v25.01-pre-278-g66902d69a 00:29:37.215 05:24:31 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:29:37.215 05:24:31 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:29:37.215 05:24:31 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:29:37.215 05:24:31 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:29:37.215 05:24:31 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:29:37.215 05:24:31 -- common/autotest_common.sh@10 -- $ set +x 00:29:37.215 
************************************ 00:29:37.215 START TEST ubsan 00:29:37.215 ************************************ 00:29:37.216 05:24:31 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:29:37.216 using ubsan 00:29:37.216 00:29:37.216 real 0m0.000s 00:29:37.216 user 0m0.000s 00:29:37.216 sys 0m0.000s 00:29:37.216 05:24:31 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:29:37.216 05:24:31 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:29:37.216 ************************************ 00:29:37.216 END TEST ubsan 00:29:37.216 ************************************ 00:29:37.473 05:24:31 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:29:37.473 05:24:31 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:29:37.473 05:24:31 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:29:37.473 05:24:31 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:29:37.473 05:24:31 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:29:37.473 05:24:31 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:29:37.473 05:24:31 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:29:37.473 05:24:31 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:29:37.473 05:24:31 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:29:37.473 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:29:37.473 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:29:37.732 Using 'verbs' RDMA provider 00:29:48.300 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:29:58.330 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:29:58.330 Creating mk/config.mk...done. 00:29:58.588 Creating mk/cc.flags.mk...done. 00:29:58.588 Type 'make' to build. 00:29:58.588 05:24:52 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:29:58.588 05:24:52 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:29:58.588 05:24:52 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:29:58.588 05:24:52 -- common/autotest_common.sh@10 -- $ set +x 00:29:58.588 ************************************ 00:29:58.588 START TEST make 00:29:58.588 ************************************ 00:29:58.588 05:24:52 make -- common/autotest_common.sh@1129 -- $ make -j48 00:29:58.851 make[1]: Nothing to be done for 'all'. 
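For reference, the configure-and-build step traced above boils down to the following commands when run by hand. This is a minimal sketch assuming a local SPDK checkout in ./spdk with the same optional dependencies present (fio sources under /usr/src/fio, the libvfio-user submodule checked out), rather than the exact autorun.sh/autobuild.sh wrapping the CI uses:

    # Same feature flags the log shows being passed to configure above.
    cd spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    # "run_test make make -j48" above amounts to a parallel build of the tree.
    make -j48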
00:30:00.765 The Meson build system
00:30:00.765 Version: 1.5.0
00:30:00.765 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:30:00.765 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:30:00.765 Build type: native build
00:30:00.765 Project name: libvfio-user
00:30:00.765 Project version: 0.0.1
00:30:00.765 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:30:00.765 C linker for the host machine: cc ld.bfd 2.40-14
00:30:00.765 Host machine cpu family: x86_64
00:30:00.765 Host machine cpu: x86_64
00:30:00.765 Run-time dependency threads found: YES
00:30:00.765 Library dl found: YES
00:30:00.765 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:30:00.765 Run-time dependency json-c found: YES 0.17
00:30:00.765 Run-time dependency cmocka found: YES 1.1.7
00:30:00.765 Program pytest-3 found: NO
00:30:00.765 Program flake8 found: NO
00:30:00.765 Program misspell-fixer found: NO
00:30:00.765 Program restructuredtext-lint found: NO
00:30:00.765 Program valgrind found: YES (/usr/bin/valgrind)
00:30:00.765 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:30:00.765 Compiler for C supports arguments -Wmissing-declarations: YES
00:30:00.765 Compiler for C supports arguments -Wwrite-strings: YES
00:30:00.765 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:30:00.765 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:30:00.765 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:30:00.765 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:30:00.765 Build targets in project: 8 00:30:00.765 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:30:00.765 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:30:00.765 00:30:00.765 libvfio-user 0.0.1 00:30:00.765 00:30:00.765 User defined options 00:30:00.765 buildtype : debug 00:30:00.765 default_library: shared 00:30:00.765 libdir : /usr/local/lib 00:30:00.765 00:30:00.765 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:30:01.354 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:30:01.616 [1/37] Compiling C object samples/null.p/null.c.o 00:30:01.616 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:30:01.616 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:30:01.616 [4/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:30:01.616 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:30:01.616 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:30:01.616 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:30:01.616 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:30:01.616 [9/37] Compiling C object samples/lspci.p/lspci.c.o 00:30:01.616 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:30:01.616 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:30:01.616 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:30:01.616 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:30:01.616 [14/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:30:01.877 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:30:01.877 [16/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:30:01.877 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:30:01.877 [18/37] Compiling C object samples/server.p/server.c.o 00:30:01.877 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:30:01.877 [20/37] Compiling C object test/unit_tests.p/mocks.c.o 00:30:01.877 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:30:01.877 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:30:01.877 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:30:01.877 [24/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:30:01.877 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:30:01.877 [26/37] Compiling C object samples/client.p/client.c.o 00:30:01.878 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:30:01.878 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:30:01.878 [29/37] Linking target samples/client 00:30:01.878 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:30:02.140 [31/37] Linking target test/unit_tests 00:30:02.140 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:30:02.140 [33/37] Linking target samples/server 00:30:02.140 [34/37] Linking target samples/null 00:30:02.140 [35/37] Linking target samples/gpio-pci-idio-16 00:30:02.140 [36/37] Linking target samples/lspci 00:30:02.140 [37/37] Linking target samples/shadow_ioeventfd_server 00:30:02.140 INFO: autodetecting backend as ninja 00:30:02.140 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
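The libvfio-user sub-build above (buildtype debug, default_library shared, libdir /usr/local/lib) corresponds roughly to the meson commands sketched below. This is an approximation pieced together from the logged options, the ninja invocation, and the DESTDIR install that follows; it is not a verbatim copy of SPDK's build glue, and WS is only a shorthand for the workspace path used throughout this log:

    # Shorthand for the Jenkins workspace root seen in the log.
    WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    # Configure the libvfio-user subproject out of tree with the options shown above.
    meson setup --buildtype debug --default-library shared --libdir /usr/local/lib \
        "$WS/spdk/build/libvfio-user/build-debug" "$WS/spdk/libvfio-user"
    # Build, then stage the install under DESTDIR exactly as the next log entry does.
    ninja -C "$WS/spdk/build/libvfio-user/build-debug"
    DESTDIR="$WS/spdk/build/libvfio-user" meson install --quiet -C "$WS/spdk/build/libvfio-user/build-debug"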
00:30:02.403 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:30:02.975 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:30:02.975 ninja: no work to do. 00:30:08.353 The Meson build system 00:30:08.353 Version: 1.5.0 00:30:08.353 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:30:08.353 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:30:08.353 Build type: native build 00:30:08.353 Program cat found: YES (/usr/bin/cat) 00:30:08.353 Project name: DPDK 00:30:08.353 Project version: 24.03.0 00:30:08.353 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:30:08.353 C linker for the host machine: cc ld.bfd 2.40-14 00:30:08.353 Host machine cpu family: x86_64 00:30:08.353 Host machine cpu: x86_64 00:30:08.353 Message: ## Building in Developer Mode ## 00:30:08.353 Program pkg-config found: YES (/usr/bin/pkg-config) 00:30:08.353 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:30:08.353 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:30:08.353 Program python3 found: YES (/usr/bin/python3) 00:30:08.353 Program cat found: YES (/usr/bin/cat) 00:30:08.353 Compiler for C supports arguments -march=native: YES 00:30:08.353 Checking for size of "void *" : 8 00:30:08.353 Checking for size of "void *" : 8 (cached) 00:30:08.353 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:30:08.353 Library m found: YES 00:30:08.353 Library numa found: YES 00:30:08.353 Has header "numaif.h" : YES 00:30:08.353 Library fdt found: NO 00:30:08.353 Library execinfo found: NO 00:30:08.353 Has header "execinfo.h" : YES 00:30:08.353 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:30:08.353 Run-time dependency libarchive found: NO (tried pkgconfig) 00:30:08.353 Run-time dependency libbsd found: NO (tried pkgconfig) 00:30:08.353 Run-time dependency jansson found: NO (tried pkgconfig) 00:30:08.353 Run-time dependency openssl found: YES 3.1.1 00:30:08.353 Run-time dependency libpcap found: YES 1.10.4 00:30:08.353 Has header "pcap.h" with dependency libpcap: YES 00:30:08.353 Compiler for C supports arguments -Wcast-qual: YES 00:30:08.353 Compiler for C supports arguments -Wdeprecated: YES 00:30:08.353 Compiler for C supports arguments -Wformat: YES 00:30:08.353 Compiler for C supports arguments -Wformat-nonliteral: NO 00:30:08.353 Compiler for C supports arguments -Wformat-security: NO 00:30:08.353 Compiler for C supports arguments -Wmissing-declarations: YES 00:30:08.353 Compiler for C supports arguments -Wmissing-prototypes: YES 00:30:08.353 Compiler for C supports arguments -Wnested-externs: YES 00:30:08.353 Compiler for C supports arguments -Wold-style-definition: YES 00:30:08.353 Compiler for C supports arguments -Wpointer-arith: YES 00:30:08.353 Compiler for C supports arguments -Wsign-compare: YES 00:30:08.353 Compiler for C supports arguments -Wstrict-prototypes: YES 00:30:08.353 Compiler for C supports arguments -Wundef: YES 00:30:08.353 Compiler for C supports arguments -Wwrite-strings: YES 00:30:08.353 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:30:08.353 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:30:08.353 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:30:08.353 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:30:08.353 Program objdump found: YES (/usr/bin/objdump) 00:30:08.353 Compiler for C supports arguments -mavx512f: YES 00:30:08.353 Checking if "AVX512 checking" compiles: YES 00:30:08.353 Fetching value of define "__SSE4_2__" : 1 00:30:08.353 Fetching value of define "__AES__" : 1 00:30:08.353 Fetching value of define "__AVX__" : 1 00:30:08.353 Fetching value of define "__AVX2__" : (undefined) 00:30:08.353 Fetching value of define "__AVX512BW__" : (undefined) 00:30:08.353 Fetching value of define "__AVX512CD__" : (undefined) 00:30:08.353 Fetching value of define "__AVX512DQ__" : (undefined) 00:30:08.353 Fetching value of define "__AVX512F__" : (undefined) 00:30:08.353 Fetching value of define "__AVX512VL__" : (undefined) 00:30:08.353 Fetching value of define "__PCLMUL__" : 1 00:30:08.353 Fetching value of define "__RDRND__" : 1 00:30:08.353 Fetching value of define "__RDSEED__" : (undefined) 00:30:08.353 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:30:08.353 Fetching value of define "__znver1__" : (undefined) 00:30:08.353 Fetching value of define "__znver2__" : (undefined) 00:30:08.353 Fetching value of define "__znver3__" : (undefined) 00:30:08.353 Fetching value of define "__znver4__" : (undefined) 00:30:08.353 Compiler for C supports arguments -Wno-format-truncation: YES 00:30:08.353 Message: lib/log: Defining dependency "log" 00:30:08.353 Message: lib/kvargs: Defining dependency "kvargs" 00:30:08.353 Message: lib/telemetry: Defining dependency "telemetry" 00:30:08.353 Checking for function "getentropy" : NO 00:30:08.353 Message: lib/eal: Defining dependency "eal" 00:30:08.353 Message: lib/ring: Defining dependency "ring" 00:30:08.353 Message: lib/rcu: Defining dependency "rcu" 00:30:08.353 Message: lib/mempool: Defining dependency "mempool" 00:30:08.353 Message: lib/mbuf: Defining dependency "mbuf" 00:30:08.353 Fetching value of define "__PCLMUL__" : 1 (cached) 00:30:08.354 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:30:08.354 Compiler for C supports arguments -mpclmul: YES 00:30:08.354 Compiler for C supports arguments -maes: YES 00:30:08.354 Compiler for C supports arguments -mavx512f: YES (cached) 00:30:08.354 Compiler for C supports arguments -mavx512bw: YES 00:30:08.354 Compiler for C supports arguments -mavx512dq: YES 00:30:08.354 Compiler for C supports arguments -mavx512vl: YES 00:30:08.354 Compiler for C supports arguments -mvpclmulqdq: YES 00:30:08.354 Compiler for C supports arguments -mavx2: YES 00:30:08.354 Compiler for C supports arguments -mavx: YES 00:30:08.354 Message: lib/net: Defining dependency "net" 00:30:08.354 Message: lib/meter: Defining dependency "meter" 00:30:08.354 Message: lib/ethdev: Defining dependency "ethdev" 00:30:08.354 Message: lib/pci: Defining dependency "pci" 00:30:08.354 Message: lib/cmdline: Defining dependency "cmdline" 00:30:08.354 Message: lib/hash: Defining dependency "hash" 00:30:08.354 Message: lib/timer: Defining dependency "timer" 00:30:08.354 Message: lib/compressdev: Defining dependency "compressdev" 00:30:08.354 Message: lib/cryptodev: Defining dependency "cryptodev" 00:30:08.354 Message: lib/dmadev: Defining dependency "dmadev" 00:30:08.354 Compiler for C supports arguments -Wno-cast-qual: YES 00:30:08.354 Message: lib/power: Defining dependency "power" 00:30:08.354 Message: lib/reorder: Defining dependency 
"reorder" 00:30:08.354 Message: lib/security: Defining dependency "security" 00:30:08.354 Has header "linux/userfaultfd.h" : YES 00:30:08.354 Has header "linux/vduse.h" : YES 00:30:08.354 Message: lib/vhost: Defining dependency "vhost" 00:30:08.354 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:30:08.354 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:30:08.354 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:30:08.354 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:30:08.354 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:30:08.354 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:30:08.354 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:30:08.354 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:30:08.354 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:30:08.354 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:30:08.354 Program doxygen found: YES (/usr/local/bin/doxygen) 00:30:08.354 Configuring doxy-api-html.conf using configuration 00:30:08.354 Configuring doxy-api-man.conf using configuration 00:30:08.354 Program mandb found: YES (/usr/bin/mandb) 00:30:08.354 Program sphinx-build found: NO 00:30:08.354 Configuring rte_build_config.h using configuration 00:30:08.354 Message: 00:30:08.354 ================= 00:30:08.354 Applications Enabled 00:30:08.354 ================= 00:30:08.354 00:30:08.354 apps: 00:30:08.354 00:30:08.354 00:30:08.354 Message: 00:30:08.354 ================= 00:30:08.354 Libraries Enabled 00:30:08.354 ================= 00:30:08.354 00:30:08.354 libs: 00:30:08.354 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:30:08.354 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:30:08.354 cryptodev, dmadev, power, reorder, security, vhost, 00:30:08.354 00:30:08.354 Message: 00:30:08.354 =============== 00:30:08.354 Drivers Enabled 00:30:08.354 =============== 00:30:08.354 00:30:08.354 common: 00:30:08.354 00:30:08.354 bus: 00:30:08.354 pci, vdev, 00:30:08.354 mempool: 00:30:08.354 ring, 00:30:08.354 dma: 00:30:08.354 00:30:08.354 net: 00:30:08.354 00:30:08.354 crypto: 00:30:08.354 00:30:08.354 compress: 00:30:08.354 00:30:08.354 vdpa: 00:30:08.354 00:30:08.354 00:30:08.354 Message: 00:30:08.354 ================= 00:30:08.354 Content Skipped 00:30:08.354 ================= 00:30:08.354 00:30:08.354 apps: 00:30:08.354 dumpcap: explicitly disabled via build config 00:30:08.354 graph: explicitly disabled via build config 00:30:08.354 pdump: explicitly disabled via build config 00:30:08.354 proc-info: explicitly disabled via build config 00:30:08.354 test-acl: explicitly disabled via build config 00:30:08.354 test-bbdev: explicitly disabled via build config 00:30:08.354 test-cmdline: explicitly disabled via build config 00:30:08.354 test-compress-perf: explicitly disabled via build config 00:30:08.354 test-crypto-perf: explicitly disabled via build config 00:30:08.354 test-dma-perf: explicitly disabled via build config 00:30:08.354 test-eventdev: explicitly disabled via build config 00:30:08.354 test-fib: explicitly disabled via build config 00:30:08.354 test-flow-perf: explicitly disabled via build config 00:30:08.354 test-gpudev: explicitly disabled via build config 00:30:08.354 test-mldev: explicitly disabled via build config 00:30:08.354 test-pipeline: explicitly disabled via build config 00:30:08.354 test-pmd: explicitly 
disabled via build config 00:30:08.354 test-regex: explicitly disabled via build config 00:30:08.354 test-sad: explicitly disabled via build config 00:30:08.354 test-security-perf: explicitly disabled via build config 00:30:08.354 00:30:08.354 libs: 00:30:08.354 argparse: explicitly disabled via build config 00:30:08.354 metrics: explicitly disabled via build config 00:30:08.354 acl: explicitly disabled via build config 00:30:08.354 bbdev: explicitly disabled via build config 00:30:08.354 bitratestats: explicitly disabled via build config 00:30:08.354 bpf: explicitly disabled via build config 00:30:08.354 cfgfile: explicitly disabled via build config 00:30:08.354 distributor: explicitly disabled via build config 00:30:08.354 efd: explicitly disabled via build config 00:30:08.354 eventdev: explicitly disabled via build config 00:30:08.354 dispatcher: explicitly disabled via build config 00:30:08.354 gpudev: explicitly disabled via build config 00:30:08.354 gro: explicitly disabled via build config 00:30:08.354 gso: explicitly disabled via build config 00:30:08.354 ip_frag: explicitly disabled via build config 00:30:08.354 jobstats: explicitly disabled via build config 00:30:08.354 latencystats: explicitly disabled via build config 00:30:08.354 lpm: explicitly disabled via build config 00:30:08.354 member: explicitly disabled via build config 00:30:08.354 pcapng: explicitly disabled via build config 00:30:08.354 rawdev: explicitly disabled via build config 00:30:08.354 regexdev: explicitly disabled via build config 00:30:08.354 mldev: explicitly disabled via build config 00:30:08.354 rib: explicitly disabled via build config 00:30:08.354 sched: explicitly disabled via build config 00:30:08.354 stack: explicitly disabled via build config 00:30:08.354 ipsec: explicitly disabled via build config 00:30:08.354 pdcp: explicitly disabled via build config 00:30:08.354 fib: explicitly disabled via build config 00:30:08.354 port: explicitly disabled via build config 00:30:08.354 pdump: explicitly disabled via build config 00:30:08.354 table: explicitly disabled via build config 00:30:08.354 pipeline: explicitly disabled via build config 00:30:08.354 graph: explicitly disabled via build config 00:30:08.354 node: explicitly disabled via build config 00:30:08.354 00:30:08.354 drivers: 00:30:08.354 common/cpt: not in enabled drivers build config 00:30:08.354 common/dpaax: not in enabled drivers build config 00:30:08.354 common/iavf: not in enabled drivers build config 00:30:08.354 common/idpf: not in enabled drivers build config 00:30:08.354 common/ionic: not in enabled drivers build config 00:30:08.354 common/mvep: not in enabled drivers build config 00:30:08.354 common/octeontx: not in enabled drivers build config 00:30:08.354 bus/auxiliary: not in enabled drivers build config 00:30:08.354 bus/cdx: not in enabled drivers build config 00:30:08.354 bus/dpaa: not in enabled drivers build config 00:30:08.354 bus/fslmc: not in enabled drivers build config 00:30:08.354 bus/ifpga: not in enabled drivers build config 00:30:08.354 bus/platform: not in enabled drivers build config 00:30:08.354 bus/uacce: not in enabled drivers build config 00:30:08.354 bus/vmbus: not in enabled drivers build config 00:30:08.354 common/cnxk: not in enabled drivers build config 00:30:08.354 common/mlx5: not in enabled drivers build config 00:30:08.354 common/nfp: not in enabled drivers build config 00:30:08.354 common/nitrox: not in enabled drivers build config 00:30:08.354 common/qat: not in enabled drivers build config 
00:30:08.354 common/sfc_efx: not in enabled drivers build config 00:30:08.354 mempool/bucket: not in enabled drivers build config 00:30:08.354 mempool/cnxk: not in enabled drivers build config 00:30:08.354 mempool/dpaa: not in enabled drivers build config 00:30:08.354 mempool/dpaa2: not in enabled drivers build config 00:30:08.354 mempool/octeontx: not in enabled drivers build config 00:30:08.354 mempool/stack: not in enabled drivers build config 00:30:08.354 dma/cnxk: not in enabled drivers build config 00:30:08.354 dma/dpaa: not in enabled drivers build config 00:30:08.354 dma/dpaa2: not in enabled drivers build config 00:30:08.354 dma/hisilicon: not in enabled drivers build config 00:30:08.354 dma/idxd: not in enabled drivers build config 00:30:08.354 dma/ioat: not in enabled drivers build config 00:30:08.354 dma/skeleton: not in enabled drivers build config 00:30:08.354 net/af_packet: not in enabled drivers build config 00:30:08.354 net/af_xdp: not in enabled drivers build config 00:30:08.354 net/ark: not in enabled drivers build config 00:30:08.354 net/atlantic: not in enabled drivers build config 00:30:08.354 net/avp: not in enabled drivers build config 00:30:08.354 net/axgbe: not in enabled drivers build config 00:30:08.354 net/bnx2x: not in enabled drivers build config 00:30:08.354 net/bnxt: not in enabled drivers build config 00:30:08.354 net/bonding: not in enabled drivers build config 00:30:08.354 net/cnxk: not in enabled drivers build config 00:30:08.354 net/cpfl: not in enabled drivers build config 00:30:08.354 net/cxgbe: not in enabled drivers build config 00:30:08.354 net/dpaa: not in enabled drivers build config 00:30:08.354 net/dpaa2: not in enabled drivers build config 00:30:08.354 net/e1000: not in enabled drivers build config 00:30:08.354 net/ena: not in enabled drivers build config 00:30:08.354 net/enetc: not in enabled drivers build config 00:30:08.355 net/enetfec: not in enabled drivers build config 00:30:08.355 net/enic: not in enabled drivers build config 00:30:08.355 net/failsafe: not in enabled drivers build config 00:30:08.355 net/fm10k: not in enabled drivers build config 00:30:08.355 net/gve: not in enabled drivers build config 00:30:08.355 net/hinic: not in enabled drivers build config 00:30:08.355 net/hns3: not in enabled drivers build config 00:30:08.355 net/i40e: not in enabled drivers build config 00:30:08.355 net/iavf: not in enabled drivers build config 00:30:08.355 net/ice: not in enabled drivers build config 00:30:08.355 net/idpf: not in enabled drivers build config 00:30:08.355 net/igc: not in enabled drivers build config 00:30:08.355 net/ionic: not in enabled drivers build config 00:30:08.355 net/ipn3ke: not in enabled drivers build config 00:30:08.355 net/ixgbe: not in enabled drivers build config 00:30:08.355 net/mana: not in enabled drivers build config 00:30:08.355 net/memif: not in enabled drivers build config 00:30:08.355 net/mlx4: not in enabled drivers build config 00:30:08.355 net/mlx5: not in enabled drivers build config 00:30:08.355 net/mvneta: not in enabled drivers build config 00:30:08.355 net/mvpp2: not in enabled drivers build config 00:30:08.355 net/netvsc: not in enabled drivers build config 00:30:08.355 net/nfb: not in enabled drivers build config 00:30:08.355 net/nfp: not in enabled drivers build config 00:30:08.355 net/ngbe: not in enabled drivers build config 00:30:08.355 net/null: not in enabled drivers build config 00:30:08.355 net/octeontx: not in enabled drivers build config 00:30:08.355 net/octeon_ep: not in enabled 
drivers build config 00:30:08.355 net/pcap: not in enabled drivers build config 00:30:08.355 net/pfe: not in enabled drivers build config 00:30:08.355 net/qede: not in enabled drivers build config 00:30:08.355 net/ring: not in enabled drivers build config 00:30:08.355 net/sfc: not in enabled drivers build config 00:30:08.355 net/softnic: not in enabled drivers build config 00:30:08.355 net/tap: not in enabled drivers build config 00:30:08.355 net/thunderx: not in enabled drivers build config 00:30:08.355 net/txgbe: not in enabled drivers build config 00:30:08.355 net/vdev_netvsc: not in enabled drivers build config 00:30:08.355 net/vhost: not in enabled drivers build config 00:30:08.355 net/virtio: not in enabled drivers build config 00:30:08.355 net/vmxnet3: not in enabled drivers build config 00:30:08.355 raw/*: missing internal dependency, "rawdev" 00:30:08.355 crypto/armv8: not in enabled drivers build config 00:30:08.355 crypto/bcmfs: not in enabled drivers build config 00:30:08.355 crypto/caam_jr: not in enabled drivers build config 00:30:08.355 crypto/ccp: not in enabled drivers build config 00:30:08.355 crypto/cnxk: not in enabled drivers build config 00:30:08.355 crypto/dpaa_sec: not in enabled drivers build config 00:30:08.355 crypto/dpaa2_sec: not in enabled drivers build config 00:30:08.355 crypto/ipsec_mb: not in enabled drivers build config 00:30:08.355 crypto/mlx5: not in enabled drivers build config 00:30:08.355 crypto/mvsam: not in enabled drivers build config 00:30:08.355 crypto/nitrox: not in enabled drivers build config 00:30:08.355 crypto/null: not in enabled drivers build config 00:30:08.355 crypto/octeontx: not in enabled drivers build config 00:30:08.355 crypto/openssl: not in enabled drivers build config 00:30:08.355 crypto/scheduler: not in enabled drivers build config 00:30:08.355 crypto/uadk: not in enabled drivers build config 00:30:08.355 crypto/virtio: not in enabled drivers build config 00:30:08.355 compress/isal: not in enabled drivers build config 00:30:08.355 compress/mlx5: not in enabled drivers build config 00:30:08.355 compress/nitrox: not in enabled drivers build config 00:30:08.355 compress/octeontx: not in enabled drivers build config 00:30:08.355 compress/zlib: not in enabled drivers build config 00:30:08.355 regex/*: missing internal dependency, "regexdev" 00:30:08.355 ml/*: missing internal dependency, "mldev" 00:30:08.355 vdpa/ifc: not in enabled drivers build config 00:30:08.355 vdpa/mlx5: not in enabled drivers build config 00:30:08.355 vdpa/nfp: not in enabled drivers build config 00:30:08.355 vdpa/sfc: not in enabled drivers build config 00:30:08.355 event/*: missing internal dependency, "eventdev" 00:30:08.355 baseband/*: missing internal dependency, "bbdev" 00:30:08.355 gpu/*: missing internal dependency, "gpudev" 00:30:08.355 00:30:08.355 00:30:08.355 Build targets in project: 85 00:30:08.355 00:30:08.355 DPDK 24.03.0 00:30:08.355 00:30:08.355 User defined options 00:30:08.355 buildtype : debug 00:30:08.355 default_library : shared 00:30:08.355 libdir : lib 00:30:08.355 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:30:08.355 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:30:08.355 c_link_args : 00:30:08.355 cpu_instruction_set: native 00:30:08.355 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:30:08.355 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:30:08.355 enable_docs : false 00:30:08.355 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:30:08.355 enable_kmods : false 00:30:08.355 max_lcores : 128 00:30:08.355 tests : false 00:30:08.355 00:30:08.355 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:30:08.622 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:30:08.622 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:30:08.622 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:30:08.882 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:30:08.882 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:30:08.882 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:30:08.882 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:30:08.882 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:30:08.882 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:30:08.882 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:30:08.882 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:30:08.882 [11/268] Linking static target lib/librte_kvargs.a 00:30:08.882 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:30:08.882 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:30:08.882 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:30:08.882 [15/268] Linking static target lib/librte_log.a 00:30:08.882 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:30:09.452 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:30:09.719 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:30:09.719 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:30:09.719 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:30:09.719 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:30:09.719 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:30:09.719 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:30:09.719 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:30:09.719 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:30:09.719 [26/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:30:09.719 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:30:09.719 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:30:09.719 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 
00:30:09.719 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:30:09.719 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:30:09.719 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:30:09.719 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:30:09.719 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:30:09.719 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:30:09.719 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:30:09.719 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:30:09.719 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:30:09.719 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:30:09.719 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:30:09.719 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:30:09.719 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:30:09.719 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:30:09.719 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:30:09.719 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:30:09.719 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:30:09.719 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:30:09.719 [48/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:30:09.719 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:30:09.719 [50/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:30:09.719 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:30:09.719 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:30:09.719 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:30:09.719 [54/268] Linking static target lib/librte_telemetry.a 00:30:09.719 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:30:09.719 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:30:09.719 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:30:09.719 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:30:09.720 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:30:09.981 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:30:09.981 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:30:09.981 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:30:09.981 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:30:09.981 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:30:09.981 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:30:09.981 [66/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:30:10.240 [67/268] Linking target lib/librte_log.so.24.1 00:30:10.240 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:30:10.240 [69/268] Linking static target lib/librte_pci.a 00:30:10.505 [70/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:30:10.505 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:30:10.505 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:30:10.505 [73/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:30:10.505 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:30:10.505 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:30:10.505 [76/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:30:10.505 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:30:10.505 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:30:10.505 [79/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:30:10.763 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:30:10.763 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:30:10.763 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:30:10.763 [83/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:30:10.763 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:30:10.763 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:30:10.763 [86/268] Linking target lib/librte_kvargs.so.24.1 00:30:10.763 [87/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:30:10.763 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:30:10.763 [89/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:30:10.763 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:30:10.764 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:30:10.764 [92/268] Linking static target lib/librte_ring.a 00:30:10.764 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:30:10.764 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:30:10.764 [95/268] Linking static target lib/librte_meter.a 00:30:10.764 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:30:10.764 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:30:10.764 [98/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:30:10.764 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:30:10.764 [100/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:30:10.764 [101/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:30:10.764 [102/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:30:10.764 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:30:10.764 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:30:10.764 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:30:10.764 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:30:10.764 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:30:10.764 [108/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:30:10.764 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:30:10.764 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:30:11.026 [111/268] 
Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:30:11.026 [112/268] Linking static target lib/librte_eal.a 00:30:11.026 [113/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:30:11.026 [114/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:30:11.026 [115/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:30:11.026 [116/268] Linking static target lib/librte_rcu.a 00:30:11.026 [117/268] Linking static target lib/librte_mempool.a 00:30:11.026 [118/268] Linking target lib/librte_telemetry.so.24.1 00:30:11.026 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:30:11.026 [120/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:30:11.026 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:30:11.026 [122/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:30:11.026 [123/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:30:11.026 [124/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:30:11.026 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:30:11.026 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:30:11.026 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:30:11.026 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:30:11.026 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:30:11.285 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:30:11.285 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:30:11.285 [132/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:30:11.285 [133/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:30:11.285 [134/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:30:11.285 [135/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:30:11.285 [136/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:30:11.285 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:30:11.285 [138/268] Linking static target lib/librte_net.a 00:30:11.544 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:30:11.544 [140/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:30:11.544 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:30:11.544 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:30:11.544 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:30:11.544 [144/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:30:11.544 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:30:11.544 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:30:11.544 [147/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:30:11.544 [148/268] Linking static target lib/librte_cmdline.a 00:30:11.803 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:30:11.803 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:30:11.803 [151/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:30:11.803 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:30:11.803 [153/268] Linking static target lib/librte_timer.a 00:30:11.803 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:30:11.803 [155/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:30:11.803 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:30:11.803 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:30:11.803 [158/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:30:11.803 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:30:12.062 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:30:12.062 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:30:12.062 [162/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:30:12.062 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:30:12.062 [164/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:30:12.062 [165/268] Linking static target lib/librte_dmadev.a 00:30:12.062 [166/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:30:12.062 [167/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:30:12.062 [168/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:30:12.063 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:30:12.063 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:30:12.063 [171/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:30:12.063 [172/268] Linking static target lib/librte_power.a 00:30:12.320 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:30:12.320 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:30:12.320 [175/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:30:12.320 [176/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:30:12.320 [177/268] Linking static target lib/librte_hash.a 00:30:12.320 [178/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:30:12.320 [179/268] Linking static target lib/librte_compressdev.a 00:30:12.320 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:30:12.320 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:30:12.320 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:30:12.320 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:30:12.320 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:30:12.320 [185/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:30:12.579 [186/268] Linking static target lib/librte_mbuf.a 00:30:12.579 [187/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:30:12.579 [188/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:30:12.579 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:30:12.579 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:30:12.579 
[191/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:30:12.579 [192/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:30:12.579 [193/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:30:12.579 [194/268] Linking static target lib/librte_reorder.a 00:30:12.579 [195/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:30:12.579 [196/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:30:12.579 [197/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:30:12.579 [198/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:30:12.579 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:30:12.837 [200/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:30:12.837 [201/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:30:12.837 [202/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:30:12.837 [203/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:30:12.837 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:30:12.837 [205/268] Linking static target lib/librte_security.a 00:30:12.837 [206/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:30:12.837 [207/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:30:12.837 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:30:12.837 [209/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:30:12.837 [210/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:30:12.837 [211/268] Linking static target drivers/librte_bus_pci.a 00:30:12.837 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:30:12.837 [213/268] Linking static target drivers/librte_mempool_ring.a 00:30:12.837 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:30:12.837 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:30:12.837 [216/268] Linking static target drivers/librte_bus_vdev.a 00:30:12.837 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:30:12.837 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:30:13.096 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:30:13.096 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:30:13.096 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:30:13.096 [222/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:30:13.096 [223/268] Linking static target lib/librte_ethdev.a 00:30:13.355 [224/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:30:13.355 [225/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:30:13.355 [226/268] Linking static target lib/librte_cryptodev.a 00:30:14.289 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:30:15.662 [228/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:30:17.556 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:30:17.556 [230/268] Linking target lib/librte_eal.so.24.1 00:30:17.556 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:30:17.556 [232/268] Linking target lib/librte_timer.so.24.1 00:30:17.556 [233/268] Linking target lib/librte_ring.so.24.1 00:30:17.556 [234/268] Linking target lib/librte_pci.so.24.1 00:30:17.556 [235/268] Linking target lib/librte_meter.so.24.1 00:30:17.556 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:30:17.556 [237/268] Linking target lib/librte_dmadev.so.24.1 00:30:17.556 [238/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:30:17.556 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:30:17.556 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:30:17.556 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:30:17.556 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:30:17.556 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:30:17.813 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:30:17.813 [245/268] Linking target lib/librte_rcu.so.24.1 00:30:17.813 [246/268] Linking target lib/librte_mempool.so.24.1 00:30:17.813 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:30:17.813 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:30:17.813 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:30:17.813 [250/268] Linking target lib/librte_mbuf.so.24.1 00:30:18.070 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:30:18.070 [252/268] Linking target lib/librte_reorder.so.24.1 00:30:18.070 [253/268] Linking target lib/librte_compressdev.so.24.1 00:30:18.070 [254/268] Linking target lib/librte_net.so.24.1 00:30:18.070 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:30:18.070 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:30:18.070 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:30:18.327 [258/268] Linking target lib/librte_security.so.24.1 00:30:18.327 [259/268] Linking target lib/librte_hash.so.24.1 00:30:18.327 [260/268] Linking target lib/librte_cmdline.so.24.1 00:30:18.327 [261/268] Linking target lib/librte_ethdev.so.24.1 00:30:18.327 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:30:18.327 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:30:18.327 [264/268] Linking target lib/librte_power.so.24.1 00:30:21.607 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:30:21.607 [266/268] Linking static target lib/librte_vhost.a 00:30:22.539 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:30:22.539 [268/268] Linking target lib/librte_vhost.so.24.1 00:30:22.539 INFO: autodetecting backend as ninja 00:30:22.539 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:30:44.455 CC lib/ut_mock/mock.o 00:30:44.455 CC lib/ut/ut.o 00:30:44.455 CC 
lib/log/log.o 00:30:44.455 CC lib/log/log_flags.o 00:30:44.455 CC lib/log/log_deprecated.o 00:30:44.455 LIB libspdk_ut.a 00:30:44.455 LIB libspdk_ut_mock.a 00:30:44.455 LIB libspdk_log.a 00:30:44.455 SO libspdk_ut.so.2.0 00:30:44.455 SO libspdk_ut_mock.so.6.0 00:30:44.455 SO libspdk_log.so.7.1 00:30:44.455 SYMLINK libspdk_ut.so 00:30:44.455 SYMLINK libspdk_ut_mock.so 00:30:44.455 SYMLINK libspdk_log.so 00:30:44.455 CXX lib/trace_parser/trace.o 00:30:44.455 CC lib/dma/dma.o 00:30:44.455 CC lib/ioat/ioat.o 00:30:44.455 CC lib/util/base64.o 00:30:44.455 CC lib/util/bit_array.o 00:30:44.455 CC lib/util/cpuset.o 00:30:44.455 CC lib/util/crc16.o 00:30:44.455 CC lib/util/crc32.o 00:30:44.455 CC lib/util/crc32c.o 00:30:44.455 CC lib/util/crc32_ieee.o 00:30:44.455 CC lib/util/crc64.o 00:30:44.455 CC lib/util/dif.o 00:30:44.455 CC lib/util/fd.o 00:30:44.455 CC lib/util/fd_group.o 00:30:44.455 CC lib/util/file.o 00:30:44.455 CC lib/util/hexlify.o 00:30:44.455 CC lib/util/iov.o 00:30:44.455 CC lib/util/math.o 00:30:44.455 CC lib/util/net.o 00:30:44.455 CC lib/util/pipe.o 00:30:44.455 CC lib/util/strerror_tls.o 00:30:44.455 CC lib/util/string.o 00:30:44.455 CC lib/util/uuid.o 00:30:44.455 CC lib/util/xor.o 00:30:44.455 CC lib/util/md5.o 00:30:44.455 CC lib/util/zipf.o 00:30:44.455 CC lib/vfio_user/host/vfio_user_pci.o 00:30:44.455 CC lib/vfio_user/host/vfio_user.o 00:30:44.455 LIB libspdk_dma.a 00:30:44.455 SO libspdk_dma.so.5.0 00:30:44.455 LIB libspdk_ioat.a 00:30:44.455 SYMLINK libspdk_dma.so 00:30:44.455 SO libspdk_ioat.so.7.0 00:30:44.455 SYMLINK libspdk_ioat.so 00:30:44.455 LIB libspdk_vfio_user.a 00:30:44.455 SO libspdk_vfio_user.so.5.0 00:30:44.455 SYMLINK libspdk_vfio_user.so 00:30:44.455 LIB libspdk_util.a 00:30:44.455 SO libspdk_util.so.10.1 00:30:44.455 SYMLINK libspdk_util.so 00:30:44.455 LIB libspdk_trace_parser.a 00:30:44.455 SO libspdk_trace_parser.so.6.0 00:30:44.455 CC lib/env_dpdk/env.o 00:30:44.455 CC lib/conf/conf.o 00:30:44.455 CC lib/idxd/idxd.o 00:30:44.455 CC lib/env_dpdk/memory.o 00:30:44.455 CC lib/json/json_parse.o 00:30:44.455 CC lib/vmd/vmd.o 00:30:44.455 CC lib/idxd/idxd_user.o 00:30:44.455 CC lib/env_dpdk/pci.o 00:30:44.455 CC lib/vmd/led.o 00:30:44.455 CC lib/json/json_util.o 00:30:44.455 CC lib/idxd/idxd_kernel.o 00:30:44.455 CC lib/json/json_write.o 00:30:44.455 CC lib/env_dpdk/init.o 00:30:44.455 CC lib/env_dpdk/threads.o 00:30:44.455 CC lib/rdma_utils/rdma_utils.o 00:30:44.455 CC lib/env_dpdk/pci_ioat.o 00:30:44.455 CC lib/env_dpdk/pci_virtio.o 00:30:44.455 CC lib/env_dpdk/pci_vmd.o 00:30:44.455 CC lib/env_dpdk/pci_idxd.o 00:30:44.455 CC lib/env_dpdk/pci_event.o 00:30:44.455 CC lib/env_dpdk/sigbus_handler.o 00:30:44.455 CC lib/env_dpdk/pci_dpdk.o 00:30:44.455 CC lib/env_dpdk/pci_dpdk_2207.o 00:30:44.455 CC lib/env_dpdk/pci_dpdk_2211.o 00:30:44.455 SYMLINK libspdk_trace_parser.so 00:30:44.455 LIB libspdk_conf.a 00:30:44.455 SO libspdk_conf.so.6.0 00:30:44.455 LIB libspdk_rdma_utils.a 00:30:44.455 SYMLINK libspdk_conf.so 00:30:44.455 LIB libspdk_json.a 00:30:44.455 SO libspdk_rdma_utils.so.1.0 00:30:44.455 SO libspdk_json.so.6.0 00:30:44.455 SYMLINK libspdk_rdma_utils.so 00:30:44.455 SYMLINK libspdk_json.so 00:30:44.455 CC lib/rdma_provider/common.o 00:30:44.455 CC lib/rdma_provider/rdma_provider_verbs.o 00:30:44.455 CC lib/jsonrpc/jsonrpc_server.o 00:30:44.455 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:30:44.455 CC lib/jsonrpc/jsonrpc_client.o 00:30:44.455 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:30:44.455 LIB libspdk_idxd.a 00:30:44.455 SO libspdk_idxd.so.12.1 
00:30:44.455 LIB libspdk_vmd.a 00:30:44.455 SYMLINK libspdk_idxd.so 00:30:44.455 SO libspdk_vmd.so.6.0 00:30:44.455 SYMLINK libspdk_vmd.so 00:30:44.455 LIB libspdk_rdma_provider.a 00:30:44.455 SO libspdk_rdma_provider.so.7.0 00:30:44.455 LIB libspdk_jsonrpc.a 00:30:44.455 SYMLINK libspdk_rdma_provider.so 00:30:44.455 SO libspdk_jsonrpc.so.6.0 00:30:44.455 SYMLINK libspdk_jsonrpc.so 00:30:44.455 CC lib/rpc/rpc.o 00:30:44.455 LIB libspdk_rpc.a 00:30:44.455 SO libspdk_rpc.so.6.0 00:30:44.455 SYMLINK libspdk_rpc.so 00:30:44.455 CC lib/trace/trace.o 00:30:44.455 CC lib/notify/notify.o 00:30:44.455 CC lib/trace/trace_flags.o 00:30:44.455 CC lib/notify/notify_rpc.o 00:30:44.455 CC lib/trace/trace_rpc.o 00:30:44.455 CC lib/keyring/keyring.o 00:30:44.455 CC lib/keyring/keyring_rpc.o 00:30:44.455 LIB libspdk_notify.a 00:30:44.455 SO libspdk_notify.so.6.0 00:30:44.455 SYMLINK libspdk_notify.so 00:30:44.455 LIB libspdk_keyring.a 00:30:44.455 LIB libspdk_trace.a 00:30:44.455 SO libspdk_keyring.so.2.0 00:30:44.455 SO libspdk_trace.so.11.0 00:30:44.455 SYMLINK libspdk_keyring.so 00:30:44.714 SYMLINK libspdk_trace.so 00:30:44.714 LIB libspdk_env_dpdk.a 00:30:44.714 CC lib/thread/thread.o 00:30:44.714 CC lib/thread/iobuf.o 00:30:44.714 CC lib/sock/sock.o 00:30:44.714 CC lib/sock/sock_rpc.o 00:30:44.714 SO libspdk_env_dpdk.so.15.1 00:30:44.972 SYMLINK libspdk_env_dpdk.so 00:30:45.229 LIB libspdk_sock.a 00:30:45.229 SO libspdk_sock.so.10.0 00:30:45.229 SYMLINK libspdk_sock.so 00:30:45.488 CC lib/nvme/nvme_ctrlr_cmd.o 00:30:45.488 CC lib/nvme/nvme_ctrlr.o 00:30:45.488 CC lib/nvme/nvme_fabric.o 00:30:45.488 CC lib/nvme/nvme_ns_cmd.o 00:30:45.488 CC lib/nvme/nvme_ns.o 00:30:45.488 CC lib/nvme/nvme_pcie_common.o 00:30:45.488 CC lib/nvme/nvme_pcie.o 00:30:45.488 CC lib/nvme/nvme_qpair.o 00:30:45.488 CC lib/nvme/nvme.o 00:30:45.488 CC lib/nvme/nvme_quirks.o 00:30:45.488 CC lib/nvme/nvme_transport.o 00:30:45.488 CC lib/nvme/nvme_discovery.o 00:30:45.488 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:30:45.488 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:30:45.488 CC lib/nvme/nvme_tcp.o 00:30:45.488 CC lib/nvme/nvme_opal.o 00:30:45.488 CC lib/nvme/nvme_io_msg.o 00:30:45.488 CC lib/nvme/nvme_poll_group.o 00:30:45.488 CC lib/nvme/nvme_zns.o 00:30:45.488 CC lib/nvme/nvme_stubs.o 00:30:45.488 CC lib/nvme/nvme_auth.o 00:30:45.488 CC lib/nvme/nvme_vfio_user.o 00:30:45.488 CC lib/nvme/nvme_cuse.o 00:30:45.488 CC lib/nvme/nvme_rdma.o 00:30:46.421 LIB libspdk_thread.a 00:30:46.421 SO libspdk_thread.so.11.0 00:30:46.421 SYMLINK libspdk_thread.so 00:30:46.679 CC lib/accel/accel.o 00:30:46.679 CC lib/init/json_config.o 00:30:46.679 CC lib/virtio/virtio.o 00:30:46.679 CC lib/accel/accel_rpc.o 00:30:46.679 CC lib/init/subsystem.o 00:30:46.679 CC lib/virtio/virtio_vhost_user.o 00:30:46.679 CC lib/blob/blobstore.o 00:30:46.679 CC lib/accel/accel_sw.o 00:30:46.679 CC lib/init/subsystem_rpc.o 00:30:46.679 CC lib/virtio/virtio_vfio_user.o 00:30:46.679 CC lib/init/rpc.o 00:30:46.679 CC lib/blob/request.o 00:30:46.679 CC lib/virtio/virtio_pci.o 00:30:46.679 CC lib/blob/zeroes.o 00:30:46.679 CC lib/vfu_tgt/tgt_endpoint.o 00:30:46.679 CC lib/blob/blob_bs_dev.o 00:30:46.679 CC lib/vfu_tgt/tgt_rpc.o 00:30:46.679 CC lib/fsdev/fsdev_io.o 00:30:46.679 CC lib/fsdev/fsdev.o 00:30:46.679 CC lib/fsdev/fsdev_rpc.o 00:30:46.936 LIB libspdk_init.a 00:30:46.936 SO libspdk_init.so.6.0 00:30:46.936 SYMLINK libspdk_init.so 00:30:46.936 LIB libspdk_vfu_tgt.a 00:30:46.936 SO libspdk_vfu_tgt.so.3.0 00:30:47.193 LIB libspdk_virtio.a 00:30:47.193 SYMLINK libspdk_vfu_tgt.so 
00:30:47.193 SO libspdk_virtio.so.7.0 00:30:47.193 CC lib/event/app.o 00:30:47.193 CC lib/event/reactor.o 00:30:47.193 CC lib/event/log_rpc.o 00:30:47.193 CC lib/event/app_rpc.o 00:30:47.193 CC lib/event/scheduler_static.o 00:30:47.193 SYMLINK libspdk_virtio.so 00:30:47.450 LIB libspdk_fsdev.a 00:30:47.450 SO libspdk_fsdev.so.2.0 00:30:47.450 SYMLINK libspdk_fsdev.so 00:30:47.708 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:30:47.708 LIB libspdk_event.a 00:30:47.708 SO libspdk_event.so.14.0 00:30:47.708 SYMLINK libspdk_event.so 00:30:47.708 LIB libspdk_accel.a 00:30:47.965 SO libspdk_accel.so.16.0 00:30:47.965 SYMLINK libspdk_accel.so 00:30:47.965 LIB libspdk_nvme.a 00:30:47.965 SO libspdk_nvme.so.15.0 00:30:47.965 CC lib/bdev/bdev.o 00:30:47.965 CC lib/bdev/bdev_rpc.o 00:30:47.965 CC lib/bdev/bdev_zone.o 00:30:47.965 CC lib/bdev/part.o 00:30:47.965 CC lib/bdev/scsi_nvme.o 00:30:48.223 SYMLINK libspdk_nvme.so 00:30:48.223 LIB libspdk_fuse_dispatcher.a 00:30:48.223 SO libspdk_fuse_dispatcher.so.1.0 00:30:48.223 SYMLINK libspdk_fuse_dispatcher.so 00:30:49.601 LIB libspdk_blob.a 00:30:49.859 SO libspdk_blob.so.12.0 00:30:49.859 SYMLINK libspdk_blob.so 00:30:49.859 CC lib/lvol/lvol.o 00:30:49.859 CC lib/blobfs/blobfs.o 00:30:49.859 CC lib/blobfs/tree.o 00:30:50.794 LIB libspdk_bdev.a 00:30:50.794 LIB libspdk_blobfs.a 00:30:50.794 SO libspdk_bdev.so.17.0 00:30:50.794 SO libspdk_blobfs.so.11.0 00:30:50.794 SYMLINK libspdk_blobfs.so 00:30:50.794 SYMLINK libspdk_bdev.so 00:30:51.057 LIB libspdk_lvol.a 00:30:51.057 SO libspdk_lvol.so.11.0 00:30:51.057 SYMLINK libspdk_lvol.so 00:30:51.057 CC lib/scsi/dev.o 00:30:51.057 CC lib/nbd/nbd.o 00:30:51.057 CC lib/nvmf/ctrlr.o 00:30:51.057 CC lib/ublk/ublk.o 00:30:51.057 CC lib/nbd/nbd_rpc.o 00:30:51.057 CC lib/scsi/lun.o 00:30:51.057 CC lib/nvmf/ctrlr_discovery.o 00:30:51.057 CC lib/ublk/ublk_rpc.o 00:30:51.057 CC lib/ftl/ftl_core.o 00:30:51.057 CC lib/nvmf/ctrlr_bdev.o 00:30:51.057 CC lib/ftl/ftl_init.o 00:30:51.057 CC lib/scsi/port.o 00:30:51.057 CC lib/nvmf/subsystem.o 00:30:51.057 CC lib/scsi/scsi.o 00:30:51.057 CC lib/nvmf/nvmf.o 00:30:51.057 CC lib/ftl/ftl_layout.o 00:30:51.057 CC lib/ftl/ftl_debug.o 00:30:51.057 CC lib/scsi/scsi_bdev.o 00:30:51.057 CC lib/nvmf/nvmf_rpc.o 00:30:51.057 CC lib/ftl/ftl_io.o 00:30:51.057 CC lib/scsi/scsi_pr.o 00:30:51.057 CC lib/ftl/ftl_sb.o 00:30:51.057 CC lib/nvmf/transport.o 00:30:51.057 CC lib/nvmf/tcp.o 00:30:51.057 CC lib/ftl/ftl_l2p.o 00:30:51.057 CC lib/scsi/scsi_rpc.o 00:30:51.057 CC lib/nvmf/stubs.o 00:30:51.057 CC lib/scsi/task.o 00:30:51.057 CC lib/ftl/ftl_l2p_flat.o 00:30:51.057 CC lib/nvmf/mdns_server.o 00:30:51.057 CC lib/nvmf/vfio_user.o 00:30:51.057 CC lib/ftl/ftl_nv_cache.o 00:30:51.057 CC lib/ftl/ftl_band.o 00:30:51.057 CC lib/nvmf/rdma.o 00:30:51.057 CC lib/ftl/ftl_band_ops.o 00:30:51.057 CC lib/nvmf/auth.o 00:30:51.057 CC lib/ftl/ftl_writer.o 00:30:51.057 CC lib/ftl/ftl_reloc.o 00:30:51.057 CC lib/ftl/ftl_rq.o 00:30:51.057 CC lib/ftl/ftl_l2p_cache.o 00:30:51.057 CC lib/ftl/ftl_p2l.o 00:30:51.057 CC lib/ftl/ftl_p2l_log.o 00:30:51.057 CC lib/ftl/mngt/ftl_mngt.o 00:30:51.057 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:30:51.057 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:30:51.057 CC lib/ftl/mngt/ftl_mngt_startup.o 00:30:51.057 CC lib/ftl/mngt/ftl_mngt_md.o 00:30:51.057 CC lib/ftl/mngt/ftl_mngt_misc.o 00:30:51.315 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:30:51.315 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:30:51.315 CC lib/ftl/mngt/ftl_mngt_band.o 00:30:51.315 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:30:51.315 CC 
lib/ftl/mngt/ftl_mngt_p2l.o 00:30:51.580 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:30:51.580 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:30:51.580 CC lib/ftl/utils/ftl_conf.o 00:30:51.580 CC lib/ftl/utils/ftl_md.o 00:30:51.580 CC lib/ftl/utils/ftl_mempool.o 00:30:51.580 CC lib/ftl/utils/ftl_bitmap.o 00:30:51.580 CC lib/ftl/utils/ftl_property.o 00:30:51.580 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:30:51.580 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:30:51.580 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:30:51.580 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:30:51.580 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:30:51.580 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:30:51.580 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:30:51.580 CC lib/ftl/upgrade/ftl_sb_v3.o 00:30:51.861 CC lib/ftl/upgrade/ftl_sb_v5.o 00:30:51.861 CC lib/ftl/nvc/ftl_nvc_dev.o 00:30:51.861 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:30:51.861 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:30:51.861 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:30:51.861 CC lib/ftl/base/ftl_base_dev.o 00:30:51.861 CC lib/ftl/base/ftl_base_bdev.o 00:30:51.861 CC lib/ftl/ftl_trace.o 00:30:51.861 LIB libspdk_nbd.a 00:30:51.861 SO libspdk_nbd.so.7.0 00:30:51.861 LIB libspdk_scsi.a 00:30:52.119 SYMLINK libspdk_nbd.so 00:30:52.119 SO libspdk_scsi.so.9.0 00:30:52.119 SYMLINK libspdk_scsi.so 00:30:52.119 LIB libspdk_ublk.a 00:30:52.119 SO libspdk_ublk.so.3.0 00:30:52.119 SYMLINK libspdk_ublk.so 00:30:52.377 CC lib/iscsi/conn.o 00:30:52.377 CC lib/vhost/vhost.o 00:30:52.377 CC lib/vhost/vhost_rpc.o 00:30:52.377 CC lib/iscsi/init_grp.o 00:30:52.377 CC lib/vhost/vhost_scsi.o 00:30:52.377 CC lib/iscsi/iscsi.o 00:30:52.377 CC lib/vhost/vhost_blk.o 00:30:52.377 CC lib/iscsi/param.o 00:30:52.377 CC lib/vhost/rte_vhost_user.o 00:30:52.377 CC lib/iscsi/portal_grp.o 00:30:52.377 CC lib/iscsi/tgt_node.o 00:30:52.377 CC lib/iscsi/iscsi_subsystem.o 00:30:52.377 CC lib/iscsi/iscsi_rpc.o 00:30:52.377 CC lib/iscsi/task.o 00:30:52.635 LIB libspdk_ftl.a 00:30:52.635 SO libspdk_ftl.so.9.0 00:30:52.893 SYMLINK libspdk_ftl.so 00:30:53.459 LIB libspdk_vhost.a 00:30:53.459 SO libspdk_vhost.so.8.0 00:30:53.717 SYMLINK libspdk_vhost.so 00:30:53.717 LIB libspdk_nvmf.a 00:30:53.717 LIB libspdk_iscsi.a 00:30:53.717 SO libspdk_nvmf.so.20.0 00:30:53.717 SO libspdk_iscsi.so.8.0 00:30:53.974 SYMLINK libspdk_iscsi.so 00:30:53.974 SYMLINK libspdk_nvmf.so 00:30:54.231 CC module/env_dpdk/env_dpdk_rpc.o 00:30:54.231 CC module/vfu_device/vfu_virtio.o 00:30:54.231 CC module/vfu_device/vfu_virtio_blk.o 00:30:54.231 CC module/vfu_device/vfu_virtio_scsi.o 00:30:54.231 CC module/vfu_device/vfu_virtio_rpc.o 00:30:54.231 CC module/vfu_device/vfu_virtio_fs.o 00:30:54.231 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:30:54.231 CC module/sock/posix/posix.o 00:30:54.231 CC module/scheduler/dynamic/scheduler_dynamic.o 00:30:54.231 CC module/scheduler/gscheduler/gscheduler.o 00:30:54.231 CC module/accel/iaa/accel_iaa.o 00:30:54.231 CC module/accel/dsa/accel_dsa.o 00:30:54.231 CC module/accel/iaa/accel_iaa_rpc.o 00:30:54.231 CC module/accel/error/accel_error.o 00:30:54.231 CC module/accel/dsa/accel_dsa_rpc.o 00:30:54.231 CC module/keyring/linux/keyring.o 00:30:54.231 CC module/accel/error/accel_error_rpc.o 00:30:54.231 CC module/accel/ioat/accel_ioat.o 00:30:54.231 CC module/keyring/linux/keyring_rpc.o 00:30:54.231 CC module/blob/bdev/blob_bdev.o 00:30:54.231 CC module/accel/ioat/accel_ioat_rpc.o 00:30:54.231 CC module/keyring/file/keyring.o 00:30:54.231 CC module/fsdev/aio/fsdev_aio.o 00:30:54.231 CC module/keyring/file/keyring_rpc.o 
00:30:54.231 CC module/fsdev/aio/fsdev_aio_rpc.o 00:30:54.231 CC module/fsdev/aio/linux_aio_mgr.o 00:30:54.489 LIB libspdk_env_dpdk_rpc.a 00:30:54.489 SO libspdk_env_dpdk_rpc.so.6.0 00:30:54.489 SYMLINK libspdk_env_dpdk_rpc.so 00:30:54.489 LIB libspdk_keyring_linux.a 00:30:54.489 LIB libspdk_keyring_file.a 00:30:54.489 LIB libspdk_scheduler_gscheduler.a 00:30:54.489 SO libspdk_keyring_file.so.2.0 00:30:54.489 SO libspdk_keyring_linux.so.1.0 00:30:54.489 SO libspdk_scheduler_gscheduler.so.4.0 00:30:54.489 LIB libspdk_accel_ioat.a 00:30:54.489 LIB libspdk_scheduler_dynamic.a 00:30:54.489 LIB libspdk_accel_iaa.a 00:30:54.489 LIB libspdk_accel_error.a 00:30:54.489 SO libspdk_scheduler_dynamic.so.4.0 00:30:54.489 SO libspdk_accel_ioat.so.6.0 00:30:54.489 SYMLINK libspdk_scheduler_gscheduler.so 00:30:54.489 SYMLINK libspdk_keyring_file.so 00:30:54.489 SYMLINK libspdk_keyring_linux.so 00:30:54.489 LIB libspdk_scheduler_dpdk_governor.a 00:30:54.489 SO libspdk_accel_error.so.2.0 00:30:54.489 SO libspdk_accel_iaa.so.3.0 00:30:54.747 SO libspdk_scheduler_dpdk_governor.so.4.0 00:30:54.747 SYMLINK libspdk_scheduler_dynamic.so 00:30:54.747 SYMLINK libspdk_accel_ioat.so 00:30:54.747 LIB libspdk_blob_bdev.a 00:30:54.747 LIB libspdk_accel_dsa.a 00:30:54.747 SYMLINK libspdk_accel_error.so 00:30:54.747 SYMLINK libspdk_accel_iaa.so 00:30:54.747 SO libspdk_blob_bdev.so.12.0 00:30:54.747 SO libspdk_accel_dsa.so.5.0 00:30:54.747 SYMLINK libspdk_scheduler_dpdk_governor.so 00:30:54.747 SYMLINK libspdk_blob_bdev.so 00:30:54.747 SYMLINK libspdk_accel_dsa.so 00:30:55.005 LIB libspdk_vfu_device.a 00:30:55.005 SO libspdk_vfu_device.so.3.0 00:30:55.005 CC module/bdev/delay/vbdev_delay.o 00:30:55.005 CC module/bdev/gpt/gpt.o 00:30:55.005 CC module/bdev/null/bdev_null.o 00:30:55.005 CC module/bdev/delay/vbdev_delay_rpc.o 00:30:55.005 CC module/bdev/gpt/vbdev_gpt.o 00:30:55.005 CC module/bdev/error/vbdev_error.o 00:30:55.005 CC module/bdev/null/bdev_null_rpc.o 00:30:55.005 CC module/bdev/error/vbdev_error_rpc.o 00:30:55.005 CC module/blobfs/bdev/blobfs_bdev.o 00:30:55.005 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:30:55.005 CC module/bdev/lvol/vbdev_lvol.o 00:30:55.005 CC module/bdev/malloc/bdev_malloc.o 00:30:55.005 CC module/bdev/ftl/bdev_ftl.o 00:30:55.005 CC module/bdev/zone_block/vbdev_zone_block.o 00:30:55.005 CC module/bdev/ftl/bdev_ftl_rpc.o 00:30:55.005 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:30:55.005 CC module/bdev/malloc/bdev_malloc_rpc.o 00:30:55.005 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:30:55.005 CC module/bdev/iscsi/bdev_iscsi.o 00:30:55.005 CC module/bdev/split/vbdev_split.o 00:30:55.005 CC module/bdev/raid/bdev_raid.o 00:30:55.005 CC module/bdev/raid/bdev_raid_rpc.o 00:30:55.005 CC module/bdev/passthru/vbdev_passthru.o 00:30:55.005 CC module/bdev/raid/bdev_raid_sb.o 00:30:55.005 CC module/bdev/split/vbdev_split_rpc.o 00:30:55.005 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:30:55.005 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:30:55.005 CC module/bdev/nvme/bdev_nvme_rpc.o 00:30:55.005 CC module/bdev/nvme/bdev_nvme.o 00:30:55.005 CC module/bdev/raid/raid0.o 00:30:55.005 CC module/bdev/virtio/bdev_virtio_scsi.o 00:30:55.005 CC module/bdev/raid/raid1.o 00:30:55.005 CC module/bdev/nvme/nvme_rpc.o 00:30:55.005 CC module/bdev/virtio/bdev_virtio_blk.o 00:30:55.005 CC module/bdev/raid/concat.o 00:30:55.005 CC module/bdev/nvme/bdev_mdns_client.o 00:30:55.005 CC module/bdev/aio/bdev_aio.o 00:30:55.005 CC module/bdev/virtio/bdev_virtio_rpc.o 00:30:55.005 CC module/bdev/aio/bdev_aio_rpc.o 00:30:55.005 CC 
module/bdev/nvme/vbdev_opal.o 00:30:55.005 CC module/bdev/nvme/vbdev_opal_rpc.o 00:30:55.005 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:30:55.005 SYMLINK libspdk_vfu_device.so 00:30:55.274 LIB libspdk_fsdev_aio.a 00:30:55.274 LIB libspdk_sock_posix.a 00:30:55.274 SO libspdk_fsdev_aio.so.1.0 00:30:55.274 SO libspdk_sock_posix.so.6.0 00:30:55.274 SYMLINK libspdk_fsdev_aio.so 00:30:55.274 SYMLINK libspdk_sock_posix.so 00:30:55.274 LIB libspdk_blobfs_bdev.a 00:30:55.530 SO libspdk_blobfs_bdev.so.6.0 00:30:55.530 LIB libspdk_bdev_split.a 00:30:55.530 LIB libspdk_bdev_zone_block.a 00:30:55.530 LIB libspdk_bdev_ftl.a 00:30:55.530 SYMLINK libspdk_blobfs_bdev.so 00:30:55.530 SO libspdk_bdev_split.so.6.0 00:30:55.530 SO libspdk_bdev_zone_block.so.6.0 00:30:55.530 SO libspdk_bdev_ftl.so.6.0 00:30:55.530 LIB libspdk_bdev_gpt.a 00:30:55.530 LIB libspdk_bdev_null.a 00:30:55.530 LIB libspdk_bdev_error.a 00:30:55.530 SO libspdk_bdev_gpt.so.6.0 00:30:55.530 LIB libspdk_bdev_passthru.a 00:30:55.530 SO libspdk_bdev_null.so.6.0 00:30:55.530 SO libspdk_bdev_error.so.6.0 00:30:55.530 SYMLINK libspdk_bdev_split.so 00:30:55.530 SYMLINK libspdk_bdev_zone_block.so 00:30:55.530 SYMLINK libspdk_bdev_ftl.so 00:30:55.530 SO libspdk_bdev_passthru.so.6.0 00:30:55.530 LIB libspdk_bdev_delay.a 00:30:55.530 SYMLINK libspdk_bdev_gpt.so 00:30:55.530 SYMLINK libspdk_bdev_null.so 00:30:55.530 SO libspdk_bdev_delay.so.6.0 00:30:55.530 SYMLINK libspdk_bdev_error.so 00:30:55.530 LIB libspdk_bdev_iscsi.a 00:30:55.530 SYMLINK libspdk_bdev_passthru.so 00:30:55.530 LIB libspdk_bdev_lvol.a 00:30:55.530 LIB libspdk_bdev_aio.a 00:30:55.530 SO libspdk_bdev_iscsi.so.6.0 00:30:55.530 SO libspdk_bdev_lvol.so.6.0 00:30:55.530 SO libspdk_bdev_aio.so.6.0 00:30:55.787 SYMLINK libspdk_bdev_delay.so 00:30:55.787 LIB libspdk_bdev_malloc.a 00:30:55.787 SO libspdk_bdev_malloc.so.6.0 00:30:55.787 SYMLINK libspdk_bdev_iscsi.so 00:30:55.787 SYMLINK libspdk_bdev_lvol.so 00:30:55.787 SYMLINK libspdk_bdev_aio.so 00:30:55.787 SYMLINK libspdk_bdev_malloc.so 00:30:55.787 LIB libspdk_bdev_virtio.a 00:30:55.787 SO libspdk_bdev_virtio.so.6.0 00:30:55.787 SYMLINK libspdk_bdev_virtio.so 00:30:56.385 LIB libspdk_bdev_raid.a 00:30:56.385 SO libspdk_bdev_raid.so.6.0 00:30:56.385 SYMLINK libspdk_bdev_raid.so 00:30:57.765 LIB libspdk_bdev_nvme.a 00:30:57.765 SO libspdk_bdev_nvme.so.7.1 00:30:57.765 SYMLINK libspdk_bdev_nvme.so 00:30:58.330 CC module/event/subsystems/scheduler/scheduler.o 00:30:58.330 CC module/event/subsystems/sock/sock.o 00:30:58.330 CC module/event/subsystems/keyring/keyring.o 00:30:58.330 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:30:58.330 CC module/event/subsystems/iobuf/iobuf.o 00:30:58.330 CC module/event/subsystems/fsdev/fsdev.o 00:30:58.330 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:30:58.330 CC module/event/subsystems/vmd/vmd.o 00:30:58.330 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:30:58.330 CC module/event/subsystems/vmd/vmd_rpc.o 00:30:58.330 LIB libspdk_event_keyring.a 00:30:58.330 LIB libspdk_event_vhost_blk.a 00:30:58.330 LIB libspdk_event_vfu_tgt.a 00:30:58.330 LIB libspdk_event_fsdev.a 00:30:58.330 LIB libspdk_event_sock.a 00:30:58.330 LIB libspdk_event_vmd.a 00:30:58.330 LIB libspdk_event_scheduler.a 00:30:58.330 SO libspdk_event_keyring.so.1.0 00:30:58.330 SO libspdk_event_vhost_blk.so.3.0 00:30:58.330 SO libspdk_event_vfu_tgt.so.3.0 00:30:58.330 LIB libspdk_event_iobuf.a 00:30:58.330 SO libspdk_event_fsdev.so.1.0 00:30:58.330 SO libspdk_event_sock.so.5.0 00:30:58.330 SO libspdk_event_scheduler.so.4.0 
00:30:58.330 SO libspdk_event_vmd.so.6.0 00:30:58.330 SO libspdk_event_iobuf.so.3.0 00:30:58.330 SYMLINK libspdk_event_keyring.so 00:30:58.330 SYMLINK libspdk_event_vhost_blk.so 00:30:58.330 SYMLINK libspdk_event_vfu_tgt.so 00:30:58.330 SYMLINK libspdk_event_fsdev.so 00:30:58.587 SYMLINK libspdk_event_sock.so 00:30:58.587 SYMLINK libspdk_event_scheduler.so 00:30:58.587 SYMLINK libspdk_event_vmd.so 00:30:58.587 SYMLINK libspdk_event_iobuf.so 00:30:58.587 CC module/event/subsystems/accel/accel.o 00:30:58.846 LIB libspdk_event_accel.a 00:30:58.846 SO libspdk_event_accel.so.6.0 00:30:58.846 SYMLINK libspdk_event_accel.so 00:30:59.104 CC module/event/subsystems/bdev/bdev.o 00:30:59.360 LIB libspdk_event_bdev.a 00:30:59.360 SO libspdk_event_bdev.so.6.0 00:30:59.360 SYMLINK libspdk_event_bdev.so 00:30:59.360 CC module/event/subsystems/ublk/ublk.o 00:30:59.360 CC module/event/subsystems/scsi/scsi.o 00:30:59.360 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:30:59.360 CC module/event/subsystems/nbd/nbd.o 00:30:59.360 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:30:59.617 LIB libspdk_event_nbd.a 00:30:59.617 LIB libspdk_event_ublk.a 00:30:59.617 LIB libspdk_event_scsi.a 00:30:59.617 SO libspdk_event_nbd.so.6.0 00:30:59.617 SO libspdk_event_ublk.so.3.0 00:30:59.617 SO libspdk_event_scsi.so.6.0 00:30:59.617 SYMLINK libspdk_event_nbd.so 00:30:59.617 SYMLINK libspdk_event_ublk.so 00:30:59.617 SYMLINK libspdk_event_scsi.so 00:30:59.617 LIB libspdk_event_nvmf.a 00:30:59.874 SO libspdk_event_nvmf.so.6.0 00:30:59.874 SYMLINK libspdk_event_nvmf.so 00:30:59.874 CC module/event/subsystems/iscsi/iscsi.o 00:30:59.874 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:31:00.132 LIB libspdk_event_vhost_scsi.a 00:31:00.132 LIB libspdk_event_iscsi.a 00:31:00.132 SO libspdk_event_vhost_scsi.so.3.0 00:31:00.132 SO libspdk_event_iscsi.so.6.0 00:31:00.132 SYMLINK libspdk_event_iscsi.so 00:31:00.132 SYMLINK libspdk_event_vhost_scsi.so 00:31:00.132 SO libspdk.so.6.0 00:31:00.132 SYMLINK libspdk.so 00:31:00.393 TEST_HEADER include/spdk/accel.h 00:31:00.393 CC test/rpc_client/rpc_client_test.o 00:31:00.393 TEST_HEADER include/spdk/accel_module.h 00:31:00.393 TEST_HEADER include/spdk/assert.h 00:31:00.393 CXX app/trace/trace.o 00:31:00.393 TEST_HEADER include/spdk/barrier.h 00:31:00.393 TEST_HEADER include/spdk/base64.h 00:31:00.393 TEST_HEADER include/spdk/bdev.h 00:31:00.393 CC app/trace_record/trace_record.o 00:31:00.393 TEST_HEADER include/spdk/bdev_module.h 00:31:00.393 TEST_HEADER include/spdk/bdev_zone.h 00:31:00.393 CC app/spdk_lspci/spdk_lspci.o 00:31:00.393 TEST_HEADER include/spdk/bit_array.h 00:31:00.393 CC app/spdk_nvme_perf/perf.o 00:31:00.393 CC app/spdk_nvme_discover/discovery_aer.o 00:31:00.393 TEST_HEADER include/spdk/bit_pool.h 00:31:00.393 CC app/spdk_top/spdk_top.o 00:31:00.393 TEST_HEADER include/spdk/blob_bdev.h 00:31:00.393 TEST_HEADER include/spdk/blobfs_bdev.h 00:31:00.393 CC app/spdk_nvme_identify/identify.o 00:31:00.393 TEST_HEADER include/spdk/blobfs.h 00:31:00.393 TEST_HEADER include/spdk/blob.h 00:31:00.393 TEST_HEADER include/spdk/conf.h 00:31:00.393 TEST_HEADER include/spdk/config.h 00:31:00.393 TEST_HEADER include/spdk/cpuset.h 00:31:00.393 TEST_HEADER include/spdk/crc16.h 00:31:00.393 TEST_HEADER include/spdk/crc64.h 00:31:00.393 TEST_HEADER include/spdk/crc32.h 00:31:00.393 TEST_HEADER include/spdk/dif.h 00:31:00.393 TEST_HEADER include/spdk/dma.h 00:31:00.393 TEST_HEADER include/spdk/endian.h 00:31:00.393 TEST_HEADER include/spdk/env_dpdk.h 00:31:00.393 TEST_HEADER include/spdk/env.h 
00:31:00.393 TEST_HEADER include/spdk/event.h 00:31:00.393 TEST_HEADER include/spdk/fd_group.h 00:31:00.393 TEST_HEADER include/spdk/file.h 00:31:00.393 TEST_HEADER include/spdk/fd.h 00:31:00.393 TEST_HEADER include/spdk/fsdev.h 00:31:00.393 TEST_HEADER include/spdk/fsdev_module.h 00:31:00.393 TEST_HEADER include/spdk/ftl.h 00:31:00.393 TEST_HEADER include/spdk/fuse_dispatcher.h 00:31:00.393 TEST_HEADER include/spdk/gpt_spec.h 00:31:00.393 TEST_HEADER include/spdk/hexlify.h 00:31:00.393 TEST_HEADER include/spdk/histogram_data.h 00:31:00.393 TEST_HEADER include/spdk/idxd.h 00:31:00.393 TEST_HEADER include/spdk/idxd_spec.h 00:31:00.393 TEST_HEADER include/spdk/init.h 00:31:00.393 TEST_HEADER include/spdk/ioat.h 00:31:00.393 TEST_HEADER include/spdk/ioat_spec.h 00:31:00.393 TEST_HEADER include/spdk/iscsi_spec.h 00:31:00.393 TEST_HEADER include/spdk/json.h 00:31:00.393 TEST_HEADER include/spdk/jsonrpc.h 00:31:00.393 TEST_HEADER include/spdk/keyring.h 00:31:00.393 TEST_HEADER include/spdk/keyring_module.h 00:31:00.393 TEST_HEADER include/spdk/likely.h 00:31:00.393 TEST_HEADER include/spdk/log.h 00:31:00.393 TEST_HEADER include/spdk/lvol.h 00:31:00.393 TEST_HEADER include/spdk/md5.h 00:31:00.393 TEST_HEADER include/spdk/memory.h 00:31:00.393 TEST_HEADER include/spdk/mmio.h 00:31:00.393 TEST_HEADER include/spdk/nbd.h 00:31:00.393 TEST_HEADER include/spdk/net.h 00:31:00.393 TEST_HEADER include/spdk/notify.h 00:31:00.393 TEST_HEADER include/spdk/nvme.h 00:31:00.393 TEST_HEADER include/spdk/nvme_intel.h 00:31:00.393 TEST_HEADER include/spdk/nvme_ocssd.h 00:31:00.393 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:31:00.393 TEST_HEADER include/spdk/nvme_spec.h 00:31:00.393 TEST_HEADER include/spdk/nvme_zns.h 00:31:00.393 TEST_HEADER include/spdk/nvmf_cmd.h 00:31:00.393 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:31:00.393 TEST_HEADER include/spdk/nvmf.h 00:31:00.393 TEST_HEADER include/spdk/nvmf_transport.h 00:31:00.393 TEST_HEADER include/spdk/nvmf_spec.h 00:31:00.393 TEST_HEADER include/spdk/opal.h 00:31:00.394 TEST_HEADER include/spdk/opal_spec.h 00:31:00.394 TEST_HEADER include/spdk/pci_ids.h 00:31:00.394 TEST_HEADER include/spdk/queue.h 00:31:00.394 TEST_HEADER include/spdk/pipe.h 00:31:00.394 TEST_HEADER include/spdk/reduce.h 00:31:00.394 TEST_HEADER include/spdk/scsi.h 00:31:00.394 TEST_HEADER include/spdk/rpc.h 00:31:00.394 TEST_HEADER include/spdk/scheduler.h 00:31:00.394 TEST_HEADER include/spdk/sock.h 00:31:00.394 TEST_HEADER include/spdk/scsi_spec.h 00:31:00.394 TEST_HEADER include/spdk/stdinc.h 00:31:00.394 TEST_HEADER include/spdk/string.h 00:31:00.394 TEST_HEADER include/spdk/trace.h 00:31:00.394 TEST_HEADER include/spdk/thread.h 00:31:00.394 TEST_HEADER include/spdk/trace_parser.h 00:31:00.394 TEST_HEADER include/spdk/tree.h 00:31:00.394 TEST_HEADER include/spdk/ublk.h 00:31:00.394 TEST_HEADER include/spdk/util.h 00:31:00.394 TEST_HEADER include/spdk/uuid.h 00:31:00.394 TEST_HEADER include/spdk/version.h 00:31:00.394 TEST_HEADER include/spdk/vfio_user_pci.h 00:31:00.394 TEST_HEADER include/spdk/vfio_user_spec.h 00:31:00.394 TEST_HEADER include/spdk/vhost.h 00:31:00.394 TEST_HEADER include/spdk/vmd.h 00:31:00.394 TEST_HEADER include/spdk/xor.h 00:31:00.394 TEST_HEADER include/spdk/zipf.h 00:31:00.394 CXX test/cpp_headers/accel.o 00:31:00.394 CC examples/interrupt_tgt/interrupt_tgt.o 00:31:00.394 CXX test/cpp_headers/accel_module.o 00:31:00.394 CXX test/cpp_headers/assert.o 00:31:00.394 CXX test/cpp_headers/barrier.o 00:31:00.394 CXX test/cpp_headers/base64.o 00:31:00.394 CXX 
test/cpp_headers/bdev.o 00:31:00.394 CXX test/cpp_headers/bdev_module.o 00:31:00.394 CXX test/cpp_headers/bdev_zone.o 00:31:00.394 CXX test/cpp_headers/bit_array.o 00:31:00.394 CXX test/cpp_headers/bit_pool.o 00:31:00.394 CXX test/cpp_headers/blob_bdev.o 00:31:00.394 CXX test/cpp_headers/blobfs_bdev.o 00:31:00.394 CXX test/cpp_headers/blobfs.o 00:31:00.394 CXX test/cpp_headers/blob.o 00:31:00.394 CXX test/cpp_headers/conf.o 00:31:00.394 CC app/spdk_dd/spdk_dd.o 00:31:00.394 CXX test/cpp_headers/config.o 00:31:00.394 CXX test/cpp_headers/cpuset.o 00:31:00.394 CXX test/cpp_headers/crc16.o 00:31:00.394 CC app/nvmf_tgt/nvmf_main.o 00:31:00.394 CC app/iscsi_tgt/iscsi_tgt.o 00:31:00.668 CXX test/cpp_headers/crc32.o 00:31:00.668 CC examples/util/zipf/zipf.o 00:31:00.668 CC examples/ioat/perf/perf.o 00:31:00.668 CC examples/ioat/verify/verify.o 00:31:00.668 CC test/env/vtophys/vtophys.o 00:31:00.668 CC test/thread/poller_perf/poller_perf.o 00:31:00.668 CC test/env/memory/memory_ut.o 00:31:00.668 CC test/env/pci/pci_ut.o 00:31:00.668 CC test/app/jsoncat/jsoncat.o 00:31:00.668 CC test/app/histogram_perf/histogram_perf.o 00:31:00.668 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:31:00.669 CC test/app/stub/stub.o 00:31:00.669 CC app/spdk_tgt/spdk_tgt.o 00:31:00.669 CC app/fio/nvme/fio_plugin.o 00:31:00.669 CC app/fio/bdev/fio_plugin.o 00:31:00.669 CC test/dma/test_dma/test_dma.o 00:31:00.669 CC test/app/bdev_svc/bdev_svc.o 00:31:00.669 CC test/env/mem_callbacks/mem_callbacks.o 00:31:00.669 LINK spdk_lspci 00:31:00.927 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:31:00.927 LINK rpc_client_test 00:31:00.927 LINK zipf 00:31:00.927 LINK interrupt_tgt 00:31:00.927 LINK spdk_nvme_discover 00:31:00.927 LINK histogram_perf 00:31:00.927 LINK jsoncat 00:31:00.927 LINK poller_perf 00:31:00.927 LINK nvmf_tgt 00:31:00.927 CXX test/cpp_headers/crc64.o 00:31:00.927 LINK vtophys 00:31:00.927 CXX test/cpp_headers/dif.o 00:31:00.927 LINK env_dpdk_post_init 00:31:00.927 CXX test/cpp_headers/dma.o 00:31:00.927 CXX test/cpp_headers/endian.o 00:31:00.927 CXX test/cpp_headers/env_dpdk.o 00:31:00.927 CXX test/cpp_headers/env.o 00:31:00.927 LINK spdk_trace_record 00:31:00.927 CXX test/cpp_headers/event.o 00:31:00.927 CXX test/cpp_headers/fd_group.o 00:31:00.927 LINK stub 00:31:00.927 CXX test/cpp_headers/fd.o 00:31:00.927 CXX test/cpp_headers/file.o 00:31:00.927 CXX test/cpp_headers/fsdev.o 00:31:00.927 LINK iscsi_tgt 00:31:00.927 CXX test/cpp_headers/fsdev_module.o 00:31:00.927 LINK ioat_perf 00:31:00.927 LINK verify 00:31:00.927 CXX test/cpp_headers/ftl.o 00:31:00.927 CXX test/cpp_headers/fuse_dispatcher.o 00:31:00.928 CXX test/cpp_headers/gpt_spec.o 00:31:01.186 CXX test/cpp_headers/hexlify.o 00:31:01.186 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:31:01.186 CXX test/cpp_headers/histogram_data.o 00:31:01.186 LINK bdev_svc 00:31:01.186 LINK spdk_tgt 00:31:01.186 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:31:01.186 CXX test/cpp_headers/idxd.o 00:31:01.186 CXX test/cpp_headers/idxd_spec.o 00:31:01.186 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:31:01.186 CXX test/cpp_headers/init.o 00:31:01.186 CXX test/cpp_headers/ioat.o 00:31:01.186 CXX test/cpp_headers/ioat_spec.o 00:31:01.186 CXX test/cpp_headers/iscsi_spec.o 00:31:01.186 CXX test/cpp_headers/json.o 00:31:01.186 LINK spdk_dd 00:31:01.449 CXX test/cpp_headers/jsonrpc.o 00:31:01.449 CXX test/cpp_headers/keyring.o 00:31:01.449 CXX test/cpp_headers/keyring_module.o 00:31:01.449 LINK spdk_trace 00:31:01.449 CXX test/cpp_headers/likely.o 00:31:01.449 CXX 
test/cpp_headers/log.o 00:31:01.449 CXX test/cpp_headers/lvol.o 00:31:01.449 CXX test/cpp_headers/md5.o 00:31:01.449 CXX test/cpp_headers/memory.o 00:31:01.449 CXX test/cpp_headers/mmio.o 00:31:01.449 CXX test/cpp_headers/nbd.o 00:31:01.449 CXX test/cpp_headers/net.o 00:31:01.449 CXX test/cpp_headers/notify.o 00:31:01.449 LINK pci_ut 00:31:01.449 CXX test/cpp_headers/nvme.o 00:31:01.449 CXX test/cpp_headers/nvme_intel.o 00:31:01.449 CXX test/cpp_headers/nvme_ocssd.o 00:31:01.449 CXX test/cpp_headers/nvme_ocssd_spec.o 00:31:01.449 CXX test/cpp_headers/nvme_spec.o 00:31:01.449 CXX test/cpp_headers/nvme_zns.o 00:31:01.449 CXX test/cpp_headers/nvmf_cmd.o 00:31:01.449 CC test/event/event_perf/event_perf.o 00:31:01.449 CXX test/cpp_headers/nvmf_fc_spec.o 00:31:01.708 CC test/event/reactor_perf/reactor_perf.o 00:31:01.708 CC test/event/reactor/reactor.o 00:31:01.708 CXX test/cpp_headers/nvmf.o 00:31:01.708 CXX test/cpp_headers/nvmf_spec.o 00:31:01.708 CXX test/cpp_headers/nvmf_transport.o 00:31:01.708 CC examples/sock/hello_world/hello_sock.o 00:31:01.708 CC examples/vmd/lsvmd/lsvmd.o 00:31:01.708 LINK spdk_bdev 00:31:01.708 CXX test/cpp_headers/opal.o 00:31:01.708 CC examples/thread/thread/thread_ex.o 00:31:01.708 CC examples/idxd/perf/perf.o 00:31:01.708 CC test/event/app_repeat/app_repeat.o 00:31:01.708 CC examples/vmd/led/led.o 00:31:01.708 LINK nvme_fuzz 00:31:01.708 CXX test/cpp_headers/opal_spec.o 00:31:01.708 CXX test/cpp_headers/pci_ids.o 00:31:01.708 CXX test/cpp_headers/pipe.o 00:31:01.708 CC test/event/scheduler/scheduler.o 00:31:01.708 LINK spdk_nvme 00:31:01.708 LINK test_dma 00:31:01.708 CXX test/cpp_headers/queue.o 00:31:01.708 CXX test/cpp_headers/reduce.o 00:31:01.708 CXX test/cpp_headers/rpc.o 00:31:01.708 CXX test/cpp_headers/scheduler.o 00:31:01.708 CXX test/cpp_headers/scsi.o 00:31:01.708 CXX test/cpp_headers/scsi_spec.o 00:31:01.708 CXX test/cpp_headers/sock.o 00:31:01.708 CXX test/cpp_headers/stdinc.o 00:31:01.708 CXX test/cpp_headers/string.o 00:31:01.708 CXX test/cpp_headers/thread.o 00:31:01.967 CXX test/cpp_headers/trace.o 00:31:01.967 CXX test/cpp_headers/trace_parser.o 00:31:01.967 LINK reactor_perf 00:31:01.967 LINK event_perf 00:31:01.967 CXX test/cpp_headers/tree.o 00:31:01.967 LINK reactor 00:31:01.967 CXX test/cpp_headers/ublk.o 00:31:01.967 CXX test/cpp_headers/util.o 00:31:01.967 CXX test/cpp_headers/uuid.o 00:31:01.967 CXX test/cpp_headers/version.o 00:31:01.967 CXX test/cpp_headers/vfio_user_pci.o 00:31:01.967 LINK mem_callbacks 00:31:01.967 CXX test/cpp_headers/vfio_user_spec.o 00:31:01.967 CXX test/cpp_headers/vhost.o 00:31:01.967 LINK lsvmd 00:31:01.967 CXX test/cpp_headers/vmd.o 00:31:01.967 CXX test/cpp_headers/xor.o 00:31:01.967 LINK led 00:31:01.967 LINK spdk_nvme_perf 00:31:01.967 CXX test/cpp_headers/zipf.o 00:31:01.967 LINK app_repeat 00:31:01.967 LINK vhost_fuzz 00:31:01.967 CC app/vhost/vhost.o 00:31:02.224 LINK spdk_nvme_identify 00:31:02.224 LINK hello_sock 00:31:02.224 LINK thread 00:31:02.224 LINK scheduler 00:31:02.224 LINK spdk_top 00:31:02.224 LINK idxd_perf 00:31:02.224 CC test/nvme/overhead/overhead.o 00:31:02.224 CC test/nvme/aer/aer.o 00:31:02.224 CC test/nvme/fused_ordering/fused_ordering.o 00:31:02.484 CC test/nvme/reset/reset.o 00:31:02.484 CC test/nvme/boot_partition/boot_partition.o 00:31:02.484 CC test/nvme/e2edp/nvme_dp.o 00:31:02.484 CC test/nvme/connect_stress/connect_stress.o 00:31:02.484 CC test/nvme/simple_copy/simple_copy.o 00:31:02.484 CC test/nvme/compliance/nvme_compliance.o 00:31:02.484 CC test/nvme/reserve/reserve.o 
00:31:02.484 CC test/nvme/sgl/sgl.o 00:31:02.484 CC test/nvme/err_injection/err_injection.o 00:31:02.484 CC test/nvme/doorbell_aers/doorbell_aers.o 00:31:02.484 CC test/nvme/cuse/cuse.o 00:31:02.484 CC test/nvme/startup/startup.o 00:31:02.484 CC test/nvme/fdp/fdp.o 00:31:02.484 LINK vhost 00:31:02.484 CC test/blobfs/mkfs/mkfs.o 00:31:02.484 CC test/accel/dif/dif.o 00:31:02.484 CC test/lvol/esnap/esnap.o 00:31:02.484 CC examples/nvme/reconnect/reconnect.o 00:31:02.484 CC examples/nvme/hotplug/hotplug.o 00:31:02.484 CC examples/nvme/hello_world/hello_world.o 00:31:02.484 CC examples/nvme/abort/abort.o 00:31:02.484 CC examples/nvme/nvme_manage/nvme_manage.o 00:31:02.484 CC examples/nvme/cmb_copy/cmb_copy.o 00:31:02.484 CC examples/nvme/arbitration/arbitration.o 00:31:02.484 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:31:02.742 LINK startup 00:31:02.742 LINK connect_stress 00:31:02.742 LINK fused_ordering 00:31:02.742 LINK doorbell_aers 00:31:02.742 LINK err_injection 00:31:02.742 CC examples/accel/perf/accel_perf.o 00:31:02.742 LINK boot_partition 00:31:02.742 LINK simple_copy 00:31:02.742 LINK reset 00:31:02.742 LINK reserve 00:31:02.742 LINK overhead 00:31:02.742 LINK mkfs 00:31:02.742 CC examples/blob/cli/blobcli.o 00:31:02.742 CC examples/fsdev/hello_world/hello_fsdev.o 00:31:02.742 CC examples/blob/hello_world/hello_blob.o 00:31:02.742 LINK nvme_compliance 00:31:02.742 LINK fdp 00:31:02.742 LINK memory_ut 00:31:02.742 LINK sgl 00:31:02.742 LINK nvme_dp 00:31:02.742 LINK hello_world 00:31:02.742 LINK cmb_copy 00:31:03.000 LINK aer 00:31:03.000 LINK pmr_persistence 00:31:03.000 LINK hotplug 00:31:03.000 LINK abort 00:31:03.000 LINK arbitration 00:31:03.000 LINK reconnect 00:31:03.000 LINK hello_fsdev 00:31:03.259 LINK hello_blob 00:31:03.259 LINK nvme_manage 00:31:03.259 LINK accel_perf 00:31:03.259 LINK dif 00:31:03.517 LINK blobcli 00:31:03.517 LINK iscsi_fuzz 00:31:03.775 CC examples/bdev/hello_world/hello_bdev.o 00:31:03.775 CC examples/bdev/bdevperf/bdevperf.o 00:31:03.775 CC test/bdev/bdevio/bdevio.o 00:31:04.034 LINK hello_bdev 00:31:04.034 LINK cuse 00:31:04.034 LINK bdevio 00:31:04.600 LINK bdevperf 00:31:04.858 CC examples/nvmf/nvmf/nvmf.o 00:31:05.115 LINK nvmf 00:31:07.648 LINK esnap 00:31:07.907 00:31:07.907 real 1m9.408s 00:31:07.907 user 11m53.180s 00:31:07.907 sys 2m37.451s 00:31:07.907 05:26:01 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:31:07.907 05:26:01 make -- common/autotest_common.sh@10 -- $ set +x 00:31:07.907 ************************************ 00:31:07.907 END TEST make 00:31:07.907 ************************************ 00:31:07.907 05:26:02 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:31:07.907 05:26:02 -- pm/common@29 -- $ signal_monitor_resources TERM 00:31:07.907 05:26:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:31:07.907 05:26:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:07.907 05:26:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:31:07.907 05:26:02 -- pm/common@44 -- $ pid=450310 00:31:07.907 05:26:02 -- pm/common@50 -- $ kill -TERM 450310 00:31:07.907 05:26:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:07.907 05:26:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:31:07.907 05:26:02 -- pm/common@44 -- $ pid=450312 00:31:07.907 05:26:02 -- pm/common@50 -- $ kill -TERM 450312 00:31:07.907 05:26:02 -- 
pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:07.907 05:26:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:31:07.907 05:26:02 -- pm/common@44 -- $ pid=450314 00:31:07.907 05:26:02 -- pm/common@50 -- $ kill -TERM 450314 00:31:07.907 05:26:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:07.907 05:26:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:31:07.907 05:26:02 -- pm/common@44 -- $ pid=450345 00:31:07.907 05:26:02 -- pm/common@50 -- $ sudo -E kill -TERM 450345 00:31:07.907 05:26:02 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:31:07.907 05:26:02 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:31:07.907 05:26:02 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:07.907 05:26:02 -- common/autotest_common.sh@1693 -- # lcov --version 00:31:07.907 05:26:02 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:08.166 05:26:02 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:08.166 05:26:02 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:08.166 05:26:02 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:08.166 05:26:02 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:08.166 05:26:02 -- scripts/common.sh@336 -- # IFS=.-: 00:31:08.166 05:26:02 -- scripts/common.sh@336 -- # read -ra ver1 00:31:08.166 05:26:02 -- scripts/common.sh@337 -- # IFS=.-: 00:31:08.166 05:26:02 -- scripts/common.sh@337 -- # read -ra ver2 00:31:08.166 05:26:02 -- scripts/common.sh@338 -- # local 'op=<' 00:31:08.166 05:26:02 -- scripts/common.sh@340 -- # ver1_l=2 00:31:08.166 05:26:02 -- scripts/common.sh@341 -- # ver2_l=1 00:31:08.166 05:26:02 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:08.166 05:26:02 -- scripts/common.sh@344 -- # case "$op" in 00:31:08.166 05:26:02 -- scripts/common.sh@345 -- # : 1 00:31:08.166 05:26:02 -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:08.166 05:26:02 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:08.166 05:26:02 -- scripts/common.sh@365 -- # decimal 1 00:31:08.166 05:26:02 -- scripts/common.sh@353 -- # local d=1 00:31:08.166 05:26:02 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:08.166 05:26:02 -- scripts/common.sh@355 -- # echo 1 00:31:08.166 05:26:02 -- scripts/common.sh@365 -- # ver1[v]=1 00:31:08.166 05:26:02 -- scripts/common.sh@366 -- # decimal 2 00:31:08.166 05:26:02 -- scripts/common.sh@353 -- # local d=2 00:31:08.166 05:26:02 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:08.166 05:26:02 -- scripts/common.sh@355 -- # echo 2 00:31:08.166 05:26:02 -- scripts/common.sh@366 -- # ver2[v]=2 00:31:08.166 05:26:02 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:08.166 05:26:02 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:08.166 05:26:02 -- scripts/common.sh@368 -- # return 0 00:31:08.166 05:26:02 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:08.166 05:26:02 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:08.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.166 --rc genhtml_branch_coverage=1 00:31:08.166 --rc genhtml_function_coverage=1 00:31:08.166 --rc genhtml_legend=1 00:31:08.166 --rc geninfo_all_blocks=1 00:31:08.166 --rc geninfo_unexecuted_blocks=1 00:31:08.166 00:31:08.166 ' 00:31:08.166 05:26:02 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:08.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.166 --rc genhtml_branch_coverage=1 00:31:08.166 --rc genhtml_function_coverage=1 00:31:08.166 --rc genhtml_legend=1 00:31:08.166 --rc geninfo_all_blocks=1 00:31:08.166 --rc geninfo_unexecuted_blocks=1 00:31:08.166 00:31:08.166 ' 00:31:08.166 05:26:02 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:08.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.166 --rc genhtml_branch_coverage=1 00:31:08.166 --rc genhtml_function_coverage=1 00:31:08.166 --rc genhtml_legend=1 00:31:08.166 --rc geninfo_all_blocks=1 00:31:08.166 --rc geninfo_unexecuted_blocks=1 00:31:08.166 00:31:08.166 ' 00:31:08.166 05:26:02 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:08.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.166 --rc genhtml_branch_coverage=1 00:31:08.166 --rc genhtml_function_coverage=1 00:31:08.166 --rc genhtml_legend=1 00:31:08.166 --rc geninfo_all_blocks=1 00:31:08.166 --rc geninfo_unexecuted_blocks=1 00:31:08.166 00:31:08.166 ' 00:31:08.166 05:26:02 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:08.166 05:26:02 -- nvmf/common.sh@7 -- # uname -s 00:31:08.166 05:26:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:08.166 05:26:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:08.166 05:26:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:08.166 05:26:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:08.166 05:26:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:08.166 05:26:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:08.166 05:26:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:08.166 05:26:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:08.166 05:26:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:08.166 05:26:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:08.166 05:26:02 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:08.166 05:26:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:08.166 05:26:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:08.166 05:26:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:08.166 05:26:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:08.166 05:26:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:08.166 05:26:02 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:08.166 05:26:02 -- scripts/common.sh@15 -- # shopt -s extglob 00:31:08.166 05:26:02 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:08.166 05:26:02 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:08.166 05:26:02 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:08.166 05:26:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.166 05:26:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.166 05:26:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.166 05:26:02 -- paths/export.sh@5 -- # export PATH 00:31:08.166 05:26:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.166 05:26:02 -- nvmf/common.sh@51 -- # : 0 00:31:08.166 05:26:02 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:08.166 05:26:02 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:08.166 05:26:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:08.166 05:26:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:08.166 05:26:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:08.166 05:26:02 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:08.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:08.166 05:26:02 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:08.166 05:26:02 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:08.166 05:26:02 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:08.166 05:26:02 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:31:08.166 05:26:02 -- spdk/autotest.sh@32 -- # uname -s 00:31:08.166 05:26:02 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:31:08.166 05:26:02 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:31:08.166 05:26:02 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:31:08.166 05:26:02 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:31:08.166 05:26:02 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:31:08.166 05:26:02 -- spdk/autotest.sh@44 -- # modprobe nbd 00:31:08.166 05:26:02 -- spdk/autotest.sh@46 -- # type -P udevadm 00:31:08.166 05:26:02 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:31:08.166 05:26:02 -- spdk/autotest.sh@48 -- # udevadm_pid=509632 00:31:08.166 05:26:02 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:31:08.166 05:26:02 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:31:08.166 05:26:02 -- pm/common@17 -- # local monitor 00:31:08.166 05:26:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:31:08.166 05:26:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:31:08.166 05:26:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:31:08.166 05:26:02 -- pm/common@21 -- # date +%s 00:31:08.166 05:26:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:31:08.166 05:26:02 -- pm/common@21 -- # date +%s 00:31:08.166 05:26:02 -- pm/common@25 -- # sleep 1 00:31:08.166 05:26:02 -- pm/common@21 -- # date +%s 00:31:08.166 05:26:02 -- pm/common@21 -- # date +%s 00:31:08.166 05:26:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733718362 00:31:08.166 05:26:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733718362 00:31:08.166 05:26:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733718362 00:31:08.166 05:26:02 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733718362 00:31:08.166 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733718362_collect-vmstat.pm.log 00:31:08.166 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733718362_collect-cpu-load.pm.log 00:31:08.167 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733718362_collect-cpu-temp.pm.log 00:31:08.167 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733718362_collect-bmc-pm.bmc.pm.log 00:31:09.105 05:26:03 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:31:09.105 05:26:03 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:31:09.105 05:26:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:09.105 05:26:03 -- common/autotest_common.sh@10 -- # set +x 00:31:09.105 05:26:03 -- spdk/autotest.sh@59 -- # create_test_list 00:31:09.105 05:26:03 -- common/autotest_common.sh@752 -- # xtrace_disable 00:31:09.105 05:26:03 -- common/autotest_common.sh@10 -- # set +x 00:31:09.105 05:26:03 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:31:09.105 05:26:03 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:31:09.105 05:26:03 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:31:09.105 05:26:03 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:31:09.105 05:26:03 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:31:09.105 05:26:03 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:31:09.105 05:26:03 -- common/autotest_common.sh@1457 -- # uname 00:31:09.105 05:26:03 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:31:09.105 05:26:03 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:31:09.105 05:26:03 -- common/autotest_common.sh@1477 -- # uname 00:31:09.105 05:26:03 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:31:09.105 05:26:03 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:31:09.105 05:26:03 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:31:09.364 lcov: LCOV version 1.15 00:31:09.364 05:26:03 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:31:27.460 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:31:27.460 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:31:45.680 05:26:39 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:31:45.680 05:26:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:45.680 05:26:39 -- common/autotest_common.sh@10 -- # set +x 00:31:45.680 05:26:39 -- spdk/autotest.sh@78 -- # rm -f 00:31:45.680 05:26:39 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:47.051 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:31:47.051 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:31:47.051 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:31:47.051 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:31:47.051 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:31:47.051 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:31:47.051 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:31:47.051 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:31:47.051 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:31:47.051 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:31:47.051 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:31:47.051 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:31:47.051 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:31:47.051 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:31:47.051 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:31:47.051 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:31:47.051 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:31:47.051 05:26:41 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:31:47.051 05:26:41 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:31:47.051 05:26:41 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:31:47.051 05:26:41 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:31:47.051 05:26:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:31:47.051 05:26:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:31:47.051 05:26:41 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:31:47.051 05:26:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:47.051 05:26:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:47.051 05:26:41 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:31:47.051 05:26:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:31:47.051 05:26:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:31:47.051 05:26:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:31:47.051 05:26:41 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:31:47.051 05:26:41 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:31:47.310 No valid GPT data, bailing 00:31:47.310 05:26:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:47.310 05:26:41 -- scripts/common.sh@394 -- # pt= 00:31:47.310 05:26:41 -- scripts/common.sh@395 -- # return 1 00:31:47.311 05:26:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:31:47.311 1+0 records in 00:31:47.311 1+0 records out 00:31:47.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00233194 s, 450 MB/s 00:31:47.311 05:26:41 -- spdk/autotest.sh@105 -- # sync 00:31:47.311 05:26:41 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:31:47.311 05:26:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:31:47.311 05:26:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:31:49.428 05:26:43 -- spdk/autotest.sh@111 -- # uname -s 00:31:49.428 05:26:43 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:31:49.428 05:26:43 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:31:49.428 05:26:43 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:31:50.364 Hugepages 00:31:50.364 node hugesize free / total 00:31:50.364 node0 1048576kB 0 / 0 00:31:50.364 node0 2048kB 0 / 0 00:31:50.364 node1 1048576kB 0 / 0 00:31:50.364 node1 2048kB 0 / 0 00:31:50.364 00:31:50.364 Type BDF Vendor Device NUMA Driver Device Block devices 00:31:50.364 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:31:50.364 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:31:50.364 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:31:50.364 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:31:50.364 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:31:50.364 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:31:50.364 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:31:50.364 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:31:50.364 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:31:50.364 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:31:50.364 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:31:50.364 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:31:50.364 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:31:50.364 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:31:50.364 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:31:50.364 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:31:50.364 NVMe 0000:88:00.0 8086 0a54 1 
nvme nvme0 nvme0n1 00:31:50.364 05:26:44 -- spdk/autotest.sh@117 -- # uname -s 00:31:50.364 05:26:44 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:31:50.364 05:26:44 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:31:50.364 05:26:44 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:51.740 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:51.740 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:51.740 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:51.740 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:51.740 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:51.740 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:51.740 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:51.740 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:51.740 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:51.740 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:51.740 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:51.740 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:51.740 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:51.740 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:51.740 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:51.740 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:52.674 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:31:52.932 05:26:46 -- common/autotest_common.sh@1517 -- # sleep 1 00:31:53.865 05:26:47 -- common/autotest_common.sh@1518 -- # bdfs=() 00:31:53.865 05:26:47 -- common/autotest_common.sh@1518 -- # local bdfs 00:31:53.865 05:26:47 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:31:53.865 05:26:47 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:31:53.865 05:26:47 -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:53.865 05:26:47 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:53.865 05:26:47 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:53.865 05:26:47 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:53.865 05:26:47 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:53.865 05:26:48 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:53.865 05:26:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:31:53.865 05:26:48 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:55.237 Waiting for block devices as requested 00:31:55.237 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:55.237 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:55.237 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:55.495 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:55.495 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:55.495 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:55.495 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:55.754 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:55.754 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:55.754 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:56.011 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:56.011 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:56.011 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:56.011 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:56.269 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:56.269 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:56.269 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:31:56.527 05:26:50 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:31:56.527 05:26:50 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:31:56.527 05:26:50 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:31:56.527 05:26:50 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:31:56.527 05:26:50 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:31:56.527 05:26:50 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:31:56.527 05:26:50 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:31:56.527 05:26:50 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:31:56.527 05:26:50 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:31:56.527 05:26:50 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:31:56.527 05:26:50 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:31:56.527 05:26:50 -- common/autotest_common.sh@1531 -- # grep oacs 00:31:56.527 05:26:50 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:31:56.527 05:26:50 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:31:56.527 05:26:50 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:31:56.527 05:26:50 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:31:56.527 05:26:50 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:31:56.527 05:26:50 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:31:56.527 05:26:50 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:31:56.527 05:26:50 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:31:56.527 05:26:50 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:31:56.527 05:26:50 -- common/autotest_common.sh@1543 -- # continue 00:31:56.527 05:26:50 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:31:56.527 05:26:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:56.527 05:26:50 -- common/autotest_common.sh@10 -- # set +x 00:31:56.527 05:26:50 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:31:56.527 05:26:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:56.527 05:26:50 -- common/autotest_common.sh@10 -- # set +x 00:31:56.527 05:26:50 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:57.902 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:57.902 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:57.902 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:57.902 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:57.902 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:57.902 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:57.902 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:57.902 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:57.902 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:57.902 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:57.902 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:57.902 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:57.902 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:57.902 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:57.902 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:57.902 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:58.838 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:31:58.838 05:26:52 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:31:58.838 05:26:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:58.838 05:26:52 -- common/autotest_common.sh@10 -- # set +x 00:31:58.838 05:26:52 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:31:58.838 05:26:52 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:31:58.838 05:26:52 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:31:58.838 05:26:53 -- common/autotest_common.sh@1563 -- # bdfs=() 00:31:58.838 05:26:53 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:31:58.838 05:26:53 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:31:58.838 05:26:53 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:31:58.838 05:26:53 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:31:58.838 05:26:53 -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:58.838 05:26:53 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:58.838 05:26:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:58.838 05:26:53 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:58.838 05:26:53 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:58.838 05:26:53 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:58.838 05:26:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:31:59.097 05:26:53 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:31:59.097 05:26:53 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:31:59.097 05:26:53 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:31:59.097 05:26:53 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:31:59.097 05:26:53 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:31:59.097 05:26:53 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:31:59.097 05:26:53 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0 00:31:59.097 05:26:53 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]] 00:31:59.097 05:26:53 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=520024 00:31:59.097 05:26:53 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:31:59.097 05:26:53 -- common/autotest_common.sh@1585 -- # waitforlisten 520024 00:31:59.097 05:26:53 -- common/autotest_common.sh@835 -- # '[' -z 520024 ']' 00:31:59.097 05:26:53 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.097 05:26:53 -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:59.097 05:26:53 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:59.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:59.097 05:26:53 -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:59.097 05:26:53 -- common/autotest_common.sh@10 -- # set +x 00:31:59.097 [2024-12-09 05:26:53.122930] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:31:59.097 [2024-12-09 05:26:53.123013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520024 ] 00:31:59.097 [2024-12-09 05:26:53.186555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.097 [2024-12-09 05:26:53.240356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.356 05:26:53 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:59.356 05:26:53 -- common/autotest_common.sh@868 -- # return 0 00:31:59.356 05:26:53 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:31:59.356 05:26:53 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:31:59.356 05:26:53 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:32:02.635 nvme0n1 00:32:02.635 05:26:56 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:32:02.635 [2024-12-09 05:26:56.839900] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:32:02.635 [2024-12-09 05:26:56.839949] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:32:02.635 request: 00:32:02.635 { 00:32:02.635 "nvme_ctrlr_name": "nvme0", 00:32:02.635 "password": "test", 00:32:02.635 "method": "bdev_nvme_opal_revert", 00:32:02.635 "req_id": 1 00:32:02.635 } 00:32:02.635 Got JSON-RPC error response 00:32:02.635 response: 00:32:02.635 { 00:32:02.635 "code": -32603, 00:32:02.635 "message": "Internal error" 00:32:02.635 } 00:32:02.635 05:26:56 -- common/autotest_common.sh@1591 -- # true 00:32:02.635 05:26:56 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:32:02.635 05:26:56 -- common/autotest_common.sh@1595 -- # killprocess 520024 00:32:02.635 05:26:56 -- common/autotest_common.sh@954 -- # '[' -z 520024 ']' 00:32:02.635 05:26:56 -- common/autotest_common.sh@958 -- # kill -0 520024 00:32:02.893 05:26:56 -- common/autotest_common.sh@959 -- # uname 00:32:02.893 05:26:56 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:02.893 05:26:56 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 520024 00:32:02.893 05:26:56 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:02.893 05:26:56 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:02.893 05:26:56 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 520024' 00:32:02.893 killing process with pid 520024 00:32:02.893 05:26:56 -- common/autotest_common.sh@973 -- # kill 520024 00:32:02.893 05:26:56 -- common/autotest_common.sh@978 -- # wait 520024 00:32:04.789 05:26:58 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:32:04.789 05:26:58 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:32:04.789 05:26:58 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:32:04.789 05:26:58 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:32:04.789 05:26:58 -- spdk/autotest.sh@149 -- # timing_enter lib 00:32:04.789 05:26:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:04.789 05:26:58 -- common/autotest_common.sh@10 -- # set +x 00:32:04.789 05:26:58 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:32:04.789 05:26:58 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 
00:32:04.789 05:26:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:04.789 05:26:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:04.789 05:26:58 -- common/autotest_common.sh@10 -- # set +x 00:32:04.789 ************************************ 00:32:04.789 START TEST env 00:32:04.789 ************************************ 00:32:04.789 05:26:58 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:32:04.789 * Looking for test storage... 00:32:04.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:32:04.789 05:26:58 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:04.789 05:26:58 env -- common/autotest_common.sh@1693 -- # lcov --version 00:32:04.789 05:26:58 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:04.789 05:26:58 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:04.789 05:26:58 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:04.789 05:26:58 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:04.789 05:26:58 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:04.789 05:26:58 env -- scripts/common.sh@336 -- # IFS=.-: 00:32:04.789 05:26:58 env -- scripts/common.sh@336 -- # read -ra ver1 00:32:04.789 05:26:58 env -- scripts/common.sh@337 -- # IFS=.-: 00:32:04.789 05:26:58 env -- scripts/common.sh@337 -- # read -ra ver2 00:32:04.789 05:26:58 env -- scripts/common.sh@338 -- # local 'op=<' 00:32:04.789 05:26:58 env -- scripts/common.sh@340 -- # ver1_l=2 00:32:04.789 05:26:58 env -- scripts/common.sh@341 -- # ver2_l=1 00:32:04.789 05:26:58 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:04.789 05:26:58 env -- scripts/common.sh@344 -- # case "$op" in 00:32:04.789 05:26:58 env -- scripts/common.sh@345 -- # : 1 00:32:04.789 05:26:58 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:04.789 05:26:58 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:04.789 05:26:58 env -- scripts/common.sh@365 -- # decimal 1 00:32:04.789 05:26:58 env -- scripts/common.sh@353 -- # local d=1 00:32:04.789 05:26:58 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:04.789 05:26:58 env -- scripts/common.sh@355 -- # echo 1 00:32:04.789 05:26:58 env -- scripts/common.sh@365 -- # ver1[v]=1 00:32:04.789 05:26:58 env -- scripts/common.sh@366 -- # decimal 2 00:32:04.789 05:26:58 env -- scripts/common.sh@353 -- # local d=2 00:32:04.789 05:26:58 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:04.789 05:26:58 env -- scripts/common.sh@355 -- # echo 2 00:32:04.789 05:26:58 env -- scripts/common.sh@366 -- # ver2[v]=2 00:32:04.789 05:26:58 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:04.789 05:26:58 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:04.789 05:26:58 env -- scripts/common.sh@368 -- # return 0 00:32:04.789 05:26:58 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:04.789 05:26:58 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:04.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.789 --rc genhtml_branch_coverage=1 00:32:04.789 --rc genhtml_function_coverage=1 00:32:04.789 --rc genhtml_legend=1 00:32:04.789 --rc geninfo_all_blocks=1 00:32:04.789 --rc geninfo_unexecuted_blocks=1 00:32:04.789 00:32:04.789 ' 00:32:04.789 05:26:58 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:04.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.789 --rc genhtml_branch_coverage=1 00:32:04.789 --rc genhtml_function_coverage=1 00:32:04.789 --rc genhtml_legend=1 00:32:04.789 --rc geninfo_all_blocks=1 00:32:04.789 --rc geninfo_unexecuted_blocks=1 00:32:04.789 00:32:04.789 ' 00:32:04.789 05:26:58 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:04.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.789 --rc genhtml_branch_coverage=1 00:32:04.789 --rc genhtml_function_coverage=1 00:32:04.789 --rc genhtml_legend=1 00:32:04.789 --rc geninfo_all_blocks=1 00:32:04.789 --rc geninfo_unexecuted_blocks=1 00:32:04.789 00:32:04.789 ' 00:32:04.789 05:26:58 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:04.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.789 --rc genhtml_branch_coverage=1 00:32:04.789 --rc genhtml_function_coverage=1 00:32:04.789 --rc genhtml_legend=1 00:32:04.789 --rc geninfo_all_blocks=1 00:32:04.789 --rc geninfo_unexecuted_blocks=1 00:32:04.789 00:32:04.789 ' 00:32:04.789 05:26:58 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:32:04.789 05:26:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:04.789 05:26:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:04.789 05:26:58 env -- common/autotest_common.sh@10 -- # set +x 00:32:04.789 ************************************ 00:32:04.789 START TEST env_memory 00:32:04.789 ************************************ 00:32:04.789 05:26:58 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:32:04.789 00:32:04.789 00:32:04.789 CUnit - A unit testing framework for C - Version 2.1-3 00:32:04.789 http://cunit.sourceforge.net/ 00:32:04.789 00:32:04.789 00:32:04.789 Suite: memory 00:32:04.789 Test: alloc and free memory map ...[2024-12-09 05:26:58.936812] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 284:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:32:04.790 passed 00:32:04.790 Test: mem map translation ...[2024-12-09 05:26:58.957052] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:32:04.790 [2024-12-09 05:26:58.957074] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:32:04.790 [2024-12-09 05:26:58.957129] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:32:04.790 [2024-12-09 05:26:58.957141] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 606:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:32:04.790 passed 00:32:04.790 Test: mem map registration ...[2024-12-09 05:26:58.998089] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:32:04.790 [2024-12-09 05:26:58.998107] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:32:04.790 passed 00:32:05.048 Test: mem map adjacent registrations ...passed 00:32:05.048 00:32:05.048 Run Summary: Type Total Ran Passed Failed Inactive 00:32:05.048 suites 1 1 n/a 0 0 00:32:05.048 tests 4 4 4 0 0 00:32:05.048 asserts 152 152 152 0 n/a 00:32:05.048 00:32:05.048 Elapsed time = 0.148 seconds 00:32:05.048 00:32:05.048 real 0m0.157s 00:32:05.048 user 0m0.151s 00:32:05.048 sys 0m0.006s 00:32:05.048 05:26:59 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:05.048 05:26:59 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:32:05.048 ************************************ 00:32:05.049 END TEST env_memory 00:32:05.049 ************************************ 00:32:05.049 05:26:59 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:32:05.049 05:26:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:05.049 05:26:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:05.049 05:26:59 env -- common/autotest_common.sh@10 -- # set +x 00:32:05.049 ************************************ 00:32:05.049 START TEST env_vtophys 00:32:05.049 ************************************ 00:32:05.049 05:26:59 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:32:05.049 EAL: lib.eal log level changed from notice to debug 00:32:05.049 EAL: Detected lcore 0 as core 0 on socket 0 00:32:05.049 EAL: Detected lcore 1 as core 1 on socket 0 00:32:05.049 EAL: Detected lcore 2 as core 2 on socket 0 00:32:05.049 EAL: Detected lcore 3 as core 3 on socket 0 00:32:05.049 EAL: Detected lcore 4 as core 4 on socket 0 00:32:05.049 EAL: Detected lcore 5 as core 5 on socket 0 00:32:05.049 EAL: Detected lcore 6 as core 8 on socket 0 00:32:05.049 EAL: Detected lcore 7 as core 9 on socket 0 00:32:05.049 EAL: Detected lcore 8 as core 10 on socket 0 00:32:05.049 EAL: Detected lcore 9 as core 11 on socket 0 00:32:05.049 EAL: Detected lcore 10 
as core 12 on socket 0 00:32:05.049 EAL: Detected lcore 11 as core 13 on socket 0 00:32:05.049 EAL: Detected lcore 12 as core 0 on socket 1 00:32:05.049 EAL: Detected lcore 13 as core 1 on socket 1 00:32:05.049 EAL: Detected lcore 14 as core 2 on socket 1 00:32:05.049 EAL: Detected lcore 15 as core 3 on socket 1 00:32:05.049 EAL: Detected lcore 16 as core 4 on socket 1 00:32:05.049 EAL: Detected lcore 17 as core 5 on socket 1 00:32:05.049 EAL: Detected lcore 18 as core 8 on socket 1 00:32:05.049 EAL: Detected lcore 19 as core 9 on socket 1 00:32:05.049 EAL: Detected lcore 20 as core 10 on socket 1 00:32:05.049 EAL: Detected lcore 21 as core 11 on socket 1 00:32:05.049 EAL: Detected lcore 22 as core 12 on socket 1 00:32:05.049 EAL: Detected lcore 23 as core 13 on socket 1 00:32:05.049 EAL: Detected lcore 24 as core 0 on socket 0 00:32:05.049 EAL: Detected lcore 25 as core 1 on socket 0 00:32:05.049 EAL: Detected lcore 26 as core 2 on socket 0 00:32:05.049 EAL: Detected lcore 27 as core 3 on socket 0 00:32:05.049 EAL: Detected lcore 28 as core 4 on socket 0 00:32:05.049 EAL: Detected lcore 29 as core 5 on socket 0 00:32:05.049 EAL: Detected lcore 30 as core 8 on socket 0 00:32:05.049 EAL: Detected lcore 31 as core 9 on socket 0 00:32:05.049 EAL: Detected lcore 32 as core 10 on socket 0 00:32:05.049 EAL: Detected lcore 33 as core 11 on socket 0 00:32:05.049 EAL: Detected lcore 34 as core 12 on socket 0 00:32:05.049 EAL: Detected lcore 35 as core 13 on socket 0 00:32:05.049 EAL: Detected lcore 36 as core 0 on socket 1 00:32:05.049 EAL: Detected lcore 37 as core 1 on socket 1 00:32:05.049 EAL: Detected lcore 38 as core 2 on socket 1 00:32:05.049 EAL: Detected lcore 39 as core 3 on socket 1 00:32:05.049 EAL: Detected lcore 40 as core 4 on socket 1 00:32:05.049 EAL: Detected lcore 41 as core 5 on socket 1 00:32:05.049 EAL: Detected lcore 42 as core 8 on socket 1 00:32:05.049 EAL: Detected lcore 43 as core 9 on socket 1 00:32:05.049 EAL: Detected lcore 44 as core 10 on socket 1 00:32:05.049 EAL: Detected lcore 45 as core 11 on socket 1 00:32:05.049 EAL: Detected lcore 46 as core 12 on socket 1 00:32:05.049 EAL: Detected lcore 47 as core 13 on socket 1 00:32:05.049 EAL: Maximum logical cores by configuration: 128 00:32:05.049 EAL: Detected CPU lcores: 48 00:32:05.049 EAL: Detected NUMA nodes: 2 00:32:05.049 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:32:05.049 EAL: Detected shared linkage of DPDK 00:32:05.049 EAL: No shared files mode enabled, IPC will be disabled 00:32:05.049 EAL: Bus pci wants IOVA as 'DC' 00:32:05.049 EAL: Buses did not request a specific IOVA mode. 00:32:05.049 EAL: IOMMU is available, selecting IOVA as VA mode. 00:32:05.049 EAL: Selected IOVA mode 'VA' 00:32:05.049 EAL: Probing VFIO support... 00:32:05.049 EAL: IOMMU type 1 (Type 1) is supported 00:32:05.049 EAL: IOMMU type 7 (sPAPR) is not supported 00:32:05.049 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:32:05.049 EAL: VFIO support initialized 00:32:05.049 EAL: Ask a virtual area of 0x2e000 bytes 00:32:05.049 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:32:05.049 EAL: Setting up physically contiguous memory... 
00:32:05.049 EAL: Setting maximum number of open files to 524288 00:32:05.049 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:32:05.049 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:32:05.049 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:32:05.049 EAL: Ask a virtual area of 0x61000 bytes 00:32:05.049 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:32:05.049 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:32:05.049 EAL: Ask a virtual area of 0x400000000 bytes 00:32:05.049 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:32:05.049 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:32:05.049 EAL: Ask a virtual area of 0x61000 bytes 00:32:05.049 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:32:05.049 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:32:05.049 EAL: Ask a virtual area of 0x400000000 bytes 00:32:05.049 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:32:05.049 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:32:05.049 EAL: Ask a virtual area of 0x61000 bytes 00:32:05.049 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:32:05.049 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:32:05.049 EAL: Ask a virtual area of 0x400000000 bytes 00:32:05.049 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:32:05.049 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:32:05.049 EAL: Ask a virtual area of 0x61000 bytes 00:32:05.049 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:32:05.049 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:32:05.049 EAL: Ask a virtual area of 0x400000000 bytes 00:32:05.049 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:32:05.049 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:32:05.049 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:32:05.049 EAL: Ask a virtual area of 0x61000 bytes 00:32:05.049 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:32:05.049 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:32:05.049 EAL: Ask a virtual area of 0x400000000 bytes 00:32:05.049 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:32:05.049 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:32:05.049 EAL: Ask a virtual area of 0x61000 bytes 00:32:05.049 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:32:05.049 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:32:05.049 EAL: Ask a virtual area of 0x400000000 bytes 00:32:05.049 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:32:05.049 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:32:05.049 EAL: Ask a virtual area of 0x61000 bytes 00:32:05.049 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:32:05.049 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:32:05.049 EAL: Ask a virtual area of 0x400000000 bytes 00:32:05.049 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:32:05.049 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:32:05.049 EAL: Ask a virtual area of 0x61000 bytes 00:32:05.049 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:32:05.049 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:32:05.049 EAL: Ask a virtual area of 0x400000000 bytes 00:32:05.049 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:32:05.049 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:32:05.049 EAL: Hugepages will be freed exactly as allocated. 00:32:05.049 EAL: No shared files mode enabled, IPC is disabled 00:32:05.049 EAL: No shared files mode enabled, IPC is disabled 00:32:05.049 EAL: TSC frequency is ~2700000 KHz 00:32:05.049 EAL: Main lcore 0 is ready (tid=7f214f105a00;cpuset=[0]) 00:32:05.049 EAL: Trying to obtain current memory policy. 00:32:05.049 EAL: Setting policy MPOL_PREFERRED for socket 0 00:32:05.049 EAL: Restoring previous memory policy: 0 00:32:05.049 EAL: request: mp_malloc_sync 00:32:05.049 EAL: No shared files mode enabled, IPC is disabled 00:32:05.049 EAL: Heap on socket 0 was expanded by 2MB 00:32:05.049 EAL: No shared files mode enabled, IPC is disabled 00:32:05.049 EAL: No PCI address specified using 'addr=' in: bus=pci 00:32:05.049 EAL: Mem event callback 'spdk:(nil)' registered 00:32:05.049 00:32:05.049 00:32:05.049 CUnit - A unit testing framework for C - Version 2.1-3 00:32:05.049 http://cunit.sourceforge.net/ 00:32:05.049 00:32:05.049 00:32:05.049 Suite: components_suite 00:32:05.049 Test: vtophys_malloc_test ...passed 00:32:05.049 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:32:05.049 EAL: Setting policy MPOL_PREFERRED for socket 0 00:32:05.049 EAL: Restoring previous memory policy: 4 00:32:05.049 EAL: Calling mem event callback 'spdk:(nil)' 00:32:05.049 EAL: request: mp_malloc_sync 00:32:05.049 EAL: No shared files mode enabled, IPC is disabled 00:32:05.049 EAL: Heap on socket 0 was expanded by 4MB 00:32:05.049 EAL: Calling mem event callback 'spdk:(nil)' 00:32:05.049 EAL: request: mp_malloc_sync 00:32:05.049 EAL: No shared files mode enabled, IPC is disabled 00:32:05.049 EAL: Heap on socket 0 was shrunk by 4MB 00:32:05.049 EAL: Trying to obtain current memory policy. 00:32:05.049 EAL: Setting policy MPOL_PREFERRED for socket 0 00:32:05.049 EAL: Restoring previous memory policy: 4 00:32:05.049 EAL: Calling mem event callback 'spdk:(nil)' 00:32:05.049 EAL: request: mp_malloc_sync 00:32:05.049 EAL: No shared files mode enabled, IPC is disabled 00:32:05.049 EAL: Heap on socket 0 was expanded by 6MB 00:32:05.049 EAL: Calling mem event callback 'spdk:(nil)' 00:32:05.049 EAL: request: mp_malloc_sync 00:32:05.049 EAL: No shared files mode enabled, IPC is disabled 00:32:05.049 EAL: Heap on socket 0 was shrunk by 6MB 00:32:05.049 EAL: Trying to obtain current memory policy. 00:32:05.049 EAL: Setting policy MPOL_PREFERRED for socket 0 00:32:05.049 EAL: Restoring previous memory policy: 4 00:32:05.049 EAL: Calling mem event callback 'spdk:(nil)' 00:32:05.049 EAL: request: mp_malloc_sync 00:32:05.049 EAL: No shared files mode enabled, IPC is disabled 00:32:05.049 EAL: Heap on socket 0 was expanded by 10MB 00:32:05.050 EAL: Calling mem event callback 'spdk:(nil)' 00:32:05.050 EAL: request: mp_malloc_sync 00:32:05.050 EAL: No shared files mode enabled, IPC is disabled 00:32:05.050 EAL: Heap on socket 0 was shrunk by 10MB 00:32:05.050 EAL: Trying to obtain current memory policy. 
00:32:05.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:32:05.050 EAL: Restoring previous memory policy: 4 00:32:05.050 EAL: Calling mem event callback 'spdk:(nil)' 00:32:05.050 EAL: request: mp_malloc_sync 00:32:05.050 EAL: No shared files mode enabled, IPC is disabled 00:32:05.050 EAL: Heap on socket 0 was expanded by 18MB 00:32:05.050 EAL: Calling mem event callback 'spdk:(nil)' 00:32:05.050 EAL: request: mp_malloc_sync 00:32:05.050 EAL: No shared files mode enabled, IPC is disabled 00:32:05.050 EAL: Heap on socket 0 was shrunk by 18MB 00:32:05.050 EAL: Trying to obtain current memory policy. 00:32:05.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:32:05.050 EAL: Restoring previous memory policy: 4 00:32:05.050 EAL: Calling mem event callback 'spdk:(nil)' 00:32:05.050 EAL: request: mp_malloc_sync 00:32:05.050 EAL: No shared files mode enabled, IPC is disabled 00:32:05.050 EAL: Heap on socket 0 was expanded by 34MB 00:32:05.050 EAL: Calling mem event callback 'spdk:(nil)' 00:32:05.050 EAL: request: mp_malloc_sync 00:32:05.050 EAL: No shared files mode enabled, IPC is disabled 00:32:05.050 EAL: Heap on socket 0 was shrunk by 34MB 00:32:05.050 EAL: Trying to obtain current memory policy. 00:32:05.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:32:05.050 EAL: Restoring previous memory policy: 4 00:32:05.050 EAL: Calling mem event callback 'spdk:(nil)' 00:32:05.050 EAL: request: mp_malloc_sync 00:32:05.050 EAL: No shared files mode enabled, IPC is disabled 00:32:05.050 EAL: Heap on socket 0 was expanded by 66MB 00:32:05.050 EAL: Calling mem event callback 'spdk:(nil)' 00:32:05.050 EAL: request: mp_malloc_sync 00:32:05.050 EAL: No shared files mode enabled, IPC is disabled 00:32:05.050 EAL: Heap on socket 0 was shrunk by 66MB 00:32:05.050 EAL: Trying to obtain current memory policy. 00:32:05.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:32:05.308 EAL: Restoring previous memory policy: 4 00:32:05.308 EAL: Calling mem event callback 'spdk:(nil)' 00:32:05.308 EAL: request: mp_malloc_sync 00:32:05.308 EAL: No shared files mode enabled, IPC is disabled 00:32:05.308 EAL: Heap on socket 0 was expanded by 130MB 00:32:05.308 EAL: Calling mem event callback 'spdk:(nil)' 00:32:05.308 EAL: request: mp_malloc_sync 00:32:05.308 EAL: No shared files mode enabled, IPC is disabled 00:32:05.308 EAL: Heap on socket 0 was shrunk by 130MB 00:32:05.308 EAL: Trying to obtain current memory policy. 00:32:05.308 EAL: Setting policy MPOL_PREFERRED for socket 0 00:32:05.308 EAL: Restoring previous memory policy: 4 00:32:05.308 EAL: Calling mem event callback 'spdk:(nil)' 00:32:05.308 EAL: request: mp_malloc_sync 00:32:05.308 EAL: No shared files mode enabled, IPC is disabled 00:32:05.308 EAL: Heap on socket 0 was expanded by 258MB 00:32:05.308 EAL: Calling mem event callback 'spdk:(nil)' 00:32:05.308 EAL: request: mp_malloc_sync 00:32:05.308 EAL: No shared files mode enabled, IPC is disabled 00:32:05.308 EAL: Heap on socket 0 was shrunk by 258MB 00:32:05.308 EAL: Trying to obtain current memory policy. 
00:32:05.308 EAL: Setting policy MPOL_PREFERRED for socket 0 00:32:05.566 EAL: Restoring previous memory policy: 4 00:32:05.566 EAL: Calling mem event callback 'spdk:(nil)' 00:32:05.566 EAL: request: mp_malloc_sync 00:32:05.566 EAL: No shared files mode enabled, IPC is disabled 00:32:05.566 EAL: Heap on socket 0 was expanded by 514MB 00:32:05.566 EAL: Calling mem event callback 'spdk:(nil)' 00:32:05.824 EAL: request: mp_malloc_sync 00:32:05.824 EAL: No shared files mode enabled, IPC is disabled 00:32:05.824 EAL: Heap on socket 0 was shrunk by 514MB 00:32:05.824 EAL: Trying to obtain current memory policy. 00:32:05.824 EAL: Setting policy MPOL_PREFERRED for socket 0 00:32:06.081 EAL: Restoring previous memory policy: 4 00:32:06.081 EAL: Calling mem event callback 'spdk:(nil)' 00:32:06.081 EAL: request: mp_malloc_sync 00:32:06.081 EAL: No shared files mode enabled, IPC is disabled 00:32:06.081 EAL: Heap on socket 0 was expanded by 1026MB 00:32:06.339 EAL: Calling mem event callback 'spdk:(nil)' 00:32:06.339 EAL: request: mp_malloc_sync 00:32:06.339 EAL: No shared files mode enabled, IPC is disabled 00:32:06.339 EAL: Heap on socket 0 was shrunk by 1026MB 00:32:06.339 passed 00:32:06.339 00:32:06.339 Run Summary: Type Total Ran Passed Failed Inactive 00:32:06.339 suites 1 1 n/a 0 0 00:32:06.339 tests 2 2 2 0 0 00:32:06.339 asserts 497 497 497 0 n/a 00:32:06.339 00:32:06.339 Elapsed time = 1.311 seconds 00:32:06.339 EAL: Calling mem event callback 'spdk:(nil)' 00:32:06.339 EAL: request: mp_malloc_sync 00:32:06.339 EAL: No shared files mode enabled, IPC is disabled 00:32:06.339 EAL: Heap on socket 0 was shrunk by 2MB 00:32:06.339 EAL: No shared files mode enabled, IPC is disabled 00:32:06.339 EAL: No shared files mode enabled, IPC is disabled 00:32:06.339 EAL: No shared files mode enabled, IPC is disabled 00:32:06.339 00:32:06.339 real 0m1.428s 00:32:06.339 user 0m0.835s 00:32:06.339 sys 0m0.563s 00:32:06.339 05:27:00 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:06.339 05:27:00 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:32:06.339 ************************************ 00:32:06.339 END TEST env_vtophys 00:32:06.339 ************************************ 00:32:06.339 05:27:00 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:32:06.339 05:27:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:06.339 05:27:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:06.339 05:27:00 env -- common/autotest_common.sh@10 -- # set +x 00:32:06.598 ************************************ 00:32:06.598 START TEST env_pci 00:32:06.598 ************************************ 00:32:06.598 05:27:00 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:32:06.598 00:32:06.598 00:32:06.598 CUnit - A unit testing framework for C - Version 2.1-3 00:32:06.598 http://cunit.sourceforge.net/ 00:32:06.598 00:32:06.598 00:32:06.598 Suite: pci 00:32:06.598 Test: pci_hook ...[2024-12-09 05:27:00.592691] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 520964 has claimed it 00:32:06.598 EAL: Cannot find device (10000:00:01.0) 00:32:06.598 EAL: Failed to attach device on primary process 00:32:06.598 passed 00:32:06.598 00:32:06.598 Run Summary: Type Total Ran Passed Failed Inactive 
00:32:06.598 suites 1 1 n/a 0 0 00:32:06.598 tests 1 1 1 0 0 00:32:06.598 asserts 25 25 25 0 n/a 00:32:06.598 00:32:06.599 Elapsed time = 0.022 seconds 00:32:06.599 00:32:06.599 real 0m0.035s 00:32:06.599 user 0m0.012s 00:32:06.599 sys 0m0.023s 00:32:06.599 05:27:00 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:06.599 05:27:00 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:32:06.599 ************************************ 00:32:06.599 END TEST env_pci 00:32:06.599 ************************************ 00:32:06.599 05:27:00 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:32:06.599 05:27:00 env -- env/env.sh@15 -- # uname 00:32:06.599 05:27:00 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:32:06.599 05:27:00 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:32:06.599 05:27:00 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:32:06.599 05:27:00 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:06.599 05:27:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:06.599 05:27:00 env -- common/autotest_common.sh@10 -- # set +x 00:32:06.599 ************************************ 00:32:06.599 START TEST env_dpdk_post_init 00:32:06.599 ************************************ 00:32:06.599 05:27:00 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:32:06.599 EAL: Detected CPU lcores: 48 00:32:06.599 EAL: Detected NUMA nodes: 2 00:32:06.599 EAL: Detected shared linkage of DPDK 00:32:06.599 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:32:06.599 EAL: Selected IOVA mode 'VA' 00:32:06.599 EAL: VFIO support initialized 00:32:06.599 TELEMETRY: No legacy callbacks, legacy socket not created 00:32:06.599 EAL: Using IOMMU type 1 (Type 1) 00:32:06.599 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:32:06.599 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:32:06.599 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:32:06.599 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:32:06.857 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:32:06.857 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:32:06.857 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:32:06.857 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:32:06.857 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:32:06.857 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:32:06.857 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:32:06.857 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:32:06.857 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:32:06.857 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:32:06.857 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:32:06.857 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:32:07.793 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 
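These spdk_ioat and spdk_nvme probes only succeed because the IOAT channels at 0000:00:04.x / 0000:80:04.x and the NVMe controller at 0000:88:00.0 were bound away from their kernel drivers before env_dpdk_post_init ran. A short sketch of how that binding is normally checked and applied with the setup script shipped in the same tree; the vfio-pci/uio_pci_generic choice is the script's usual behaviour, not something this log states:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo ./scripts/setup.sh status    # list NVMe/IOAT devices and the driver each is currently bound to
    sudo ./scripts/setup.sh           # bind them to vfio-pci (or uio_pci_generic without an IOMMU) for userspace probing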
00:32:11.073 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:32:11.073 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:32:11.073 Starting DPDK initialization... 00:32:11.073 Starting SPDK post initialization... 00:32:11.073 SPDK NVMe probe 00:32:11.073 Attaching to 0000:88:00.0 00:32:11.073 Attached to 0000:88:00.0 00:32:11.073 Cleaning up... 00:32:11.073 00:32:11.073 real 0m4.400s 00:32:11.073 user 0m3.026s 00:32:11.073 sys 0m0.433s 00:32:11.073 05:27:05 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:11.073 05:27:05 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:32:11.073 ************************************ 00:32:11.073 END TEST env_dpdk_post_init 00:32:11.073 ************************************ 00:32:11.073 05:27:05 env -- env/env.sh@26 -- # uname 00:32:11.073 05:27:05 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:32:11.073 05:27:05 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:32:11.073 05:27:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:11.073 05:27:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:11.073 05:27:05 env -- common/autotest_common.sh@10 -- # set +x 00:32:11.073 ************************************ 00:32:11.073 START TEST env_mem_callbacks 00:32:11.073 ************************************ 00:32:11.073 05:27:05 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:32:11.073 EAL: Detected CPU lcores: 48 00:32:11.073 EAL: Detected NUMA nodes: 2 00:32:11.073 EAL: Detected shared linkage of DPDK 00:32:11.073 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:32:11.073 EAL: Selected IOVA mode 'VA' 00:32:11.073 EAL: VFIO support initialized 00:32:11.073 TELEMETRY: No legacy callbacks, legacy socket not created 00:32:11.073 00:32:11.073 00:32:11.073 CUnit - A unit testing framework for C - Version 2.1-3 00:32:11.073 http://cunit.sourceforge.net/ 00:32:11.073 00:32:11.073 00:32:11.073 Suite: memory 00:32:11.073 Test: test ... 
00:32:11.073 register 0x200000200000 2097152 00:32:11.073 malloc 3145728 00:32:11.073 register 0x200000400000 4194304 00:32:11.073 buf 0x200000500000 len 3145728 PASSED 00:32:11.073 malloc 64 00:32:11.073 buf 0x2000004fff40 len 64 PASSED 00:32:11.073 malloc 4194304 00:32:11.073 register 0x200000800000 6291456 00:32:11.073 buf 0x200000a00000 len 4194304 PASSED 00:32:11.073 free 0x200000500000 3145728 00:32:11.073 free 0x2000004fff40 64 00:32:11.073 unregister 0x200000400000 4194304 PASSED 00:32:11.073 free 0x200000a00000 4194304 00:32:11.073 unregister 0x200000800000 6291456 PASSED 00:32:11.073 malloc 8388608 00:32:11.073 register 0x200000400000 10485760 00:32:11.073 buf 0x200000600000 len 8388608 PASSED 00:32:11.073 free 0x200000600000 8388608 00:32:11.073 unregister 0x200000400000 10485760 PASSED 00:32:11.073 passed 00:32:11.073 00:32:11.073 Run Summary: Type Total Ran Passed Failed Inactive 00:32:11.073 suites 1 1 n/a 0 0 00:32:11.073 tests 1 1 1 0 0 00:32:11.073 asserts 15 15 15 0 n/a 00:32:11.073 00:32:11.073 Elapsed time = 0.005 seconds 00:32:11.073 00:32:11.073 real 0m0.049s 00:32:11.073 user 0m0.014s 00:32:11.073 sys 0m0.034s 00:32:11.073 05:27:05 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:11.073 05:27:05 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:32:11.073 ************************************ 00:32:11.073 END TEST env_mem_callbacks 00:32:11.073 ************************************ 00:32:11.073 00:32:11.073 real 0m6.460s 00:32:11.073 user 0m4.233s 00:32:11.073 sys 0m1.278s 00:32:11.073 05:27:05 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:11.073 05:27:05 env -- common/autotest_common.sh@10 -- # set +x 00:32:11.073 ************************************ 00:32:11.073 END TEST env 00:32:11.073 ************************************ 00:32:11.073 05:27:05 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:32:11.073 05:27:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:11.073 05:27:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:11.073 05:27:05 -- common/autotest_common.sh@10 -- # set +x 00:32:11.073 ************************************ 00:32:11.073 START TEST rpc 00:32:11.073 ************************************ 00:32:11.073 05:27:05 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:32:11.073 * Looking for test storage... 
00:32:11.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:32:11.073 05:27:05 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:11.073 05:27:05 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:32:11.073 05:27:05 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:11.331 05:27:05 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:11.331 05:27:05 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:11.331 05:27:05 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:11.331 05:27:05 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:11.331 05:27:05 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:32:11.331 05:27:05 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:32:11.331 05:27:05 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:32:11.331 05:27:05 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:32:11.331 05:27:05 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:32:11.331 05:27:05 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:32:11.331 05:27:05 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:32:11.331 05:27:05 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:11.331 05:27:05 rpc -- scripts/common.sh@344 -- # case "$op" in 00:32:11.331 05:27:05 rpc -- scripts/common.sh@345 -- # : 1 00:32:11.331 05:27:05 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:11.331 05:27:05 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:11.331 05:27:05 rpc -- scripts/common.sh@365 -- # decimal 1 00:32:11.331 05:27:05 rpc -- scripts/common.sh@353 -- # local d=1 00:32:11.331 05:27:05 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:11.331 05:27:05 rpc -- scripts/common.sh@355 -- # echo 1 00:32:11.331 05:27:05 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:32:11.331 05:27:05 rpc -- scripts/common.sh@366 -- # decimal 2 00:32:11.331 05:27:05 rpc -- scripts/common.sh@353 -- # local d=2 00:32:11.331 05:27:05 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:11.331 05:27:05 rpc -- scripts/common.sh@355 -- # echo 2 00:32:11.331 05:27:05 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:32:11.331 05:27:05 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:11.331 05:27:05 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:11.331 05:27:05 rpc -- scripts/common.sh@368 -- # return 0 00:32:11.331 05:27:05 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:11.331 05:27:05 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:11.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.331 --rc genhtml_branch_coverage=1 00:32:11.331 --rc genhtml_function_coverage=1 00:32:11.331 --rc genhtml_legend=1 00:32:11.331 --rc geninfo_all_blocks=1 00:32:11.331 --rc geninfo_unexecuted_blocks=1 00:32:11.331 00:32:11.331 ' 00:32:11.331 05:27:05 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:11.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.331 --rc genhtml_branch_coverage=1 00:32:11.331 --rc genhtml_function_coverage=1 00:32:11.331 --rc genhtml_legend=1 00:32:11.331 --rc geninfo_all_blocks=1 00:32:11.331 --rc geninfo_unexecuted_blocks=1 00:32:11.331 00:32:11.331 ' 00:32:11.331 05:27:05 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:11.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.331 --rc genhtml_branch_coverage=1 00:32:11.331 --rc genhtml_function_coverage=1 
00:32:11.331 --rc genhtml_legend=1 00:32:11.331 --rc geninfo_all_blocks=1 00:32:11.331 --rc geninfo_unexecuted_blocks=1 00:32:11.331 00:32:11.331 ' 00:32:11.331 05:27:05 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:11.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.331 --rc genhtml_branch_coverage=1 00:32:11.331 --rc genhtml_function_coverage=1 00:32:11.331 --rc genhtml_legend=1 00:32:11.331 --rc geninfo_all_blocks=1 00:32:11.331 --rc geninfo_unexecuted_blocks=1 00:32:11.331 00:32:11.331 ' 00:32:11.331 05:27:05 rpc -- rpc/rpc.sh@65 -- # spdk_pid=521817 00:32:11.331 05:27:05 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:32:11.331 05:27:05 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:32:11.331 05:27:05 rpc -- rpc/rpc.sh@67 -- # waitforlisten 521817 00:32:11.331 05:27:05 rpc -- common/autotest_common.sh@835 -- # '[' -z 521817 ']' 00:32:11.331 05:27:05 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:11.332 05:27:05 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:11.332 05:27:05 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:11.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:11.332 05:27:05 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:11.332 05:27:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:32:11.332 [2024-12-09 05:27:05.434222] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:32:11.332 [2024-12-09 05:27:05.434329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid521817 ] 00:32:11.332 [2024-12-09 05:27:05.499894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.590 [2024-12-09 05:27:05.556985] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:32:11.590 [2024-12-09 05:27:05.557047] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 521817' to capture a snapshot of events at runtime. 00:32:11.590 [2024-12-09 05:27:05.557067] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:11.590 [2024-12-09 05:27:05.557077] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:11.590 [2024-12-09 05:27:05.557085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid521817 for offline analysis/debug. 
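The two app_setup_trace notices above describe both ways of getting at the bdev tracepoints that rpc.sh enabled with '-e bdev'. A sketch of what that looks like for this particular run; the pid (521817) and shm path are specific to this job, and the spdk_trace binary is assumed to sit next to spdk_tgt under build/bin:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo ./build/bin/spdk_trace -s spdk_tgt -p 521817    # live snapshot, exactly the command the NOTICE suggests
    cp /dev/shm/spdk_tgt_trace.pid521817 /tmp/           # keep the shm trace file for offline analysis after the target exits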
00:32:11.590 [2024-12-09 05:27:05.557691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:11.848 05:27:05 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:11.848 05:27:05 rpc -- common/autotest_common.sh@868 -- # return 0 00:32:11.848 05:27:05 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:32:11.848 05:27:05 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:32:11.848 05:27:05 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:32:11.848 05:27:05 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:32:11.848 05:27:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:11.848 05:27:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:11.848 05:27:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:32:11.848 ************************************ 00:32:11.848 START TEST rpc_integrity 00:32:11.848 ************************************ 00:32:11.848 05:27:05 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:32:11.848 05:27:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:32:11.848 05:27:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.848 05:27:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:11.848 05:27:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.848 05:27:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:32:11.848 05:27:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:32:11.848 05:27:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:32:11.848 05:27:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:32:11.848 05:27:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.848 05:27:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:11.848 05:27:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.848 05:27:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:32:11.848 05:27:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:32:11.848 05:27:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.848 05:27:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:11.848 05:27:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.848 05:27:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:32:11.848 { 00:32:11.848 "name": "Malloc0", 00:32:11.848 "aliases": [ 00:32:11.848 "dd2418cd-ecd1-490a-a813-b2f255a87b1d" 00:32:11.848 ], 00:32:11.848 "product_name": "Malloc disk", 00:32:11.848 "block_size": 512, 00:32:11.848 "num_blocks": 16384, 00:32:11.848 "uuid": "dd2418cd-ecd1-490a-a813-b2f255a87b1d", 00:32:11.848 "assigned_rate_limits": { 00:32:11.848 "rw_ios_per_sec": 0, 00:32:11.848 "rw_mbytes_per_sec": 0, 00:32:11.848 "r_mbytes_per_sec": 0, 00:32:11.848 "w_mbytes_per_sec": 0 00:32:11.848 }, 
00:32:11.848 "claimed": false, 00:32:11.848 "zoned": false, 00:32:11.848 "supported_io_types": { 00:32:11.848 "read": true, 00:32:11.848 "write": true, 00:32:11.848 "unmap": true, 00:32:11.848 "flush": true, 00:32:11.848 "reset": true, 00:32:11.848 "nvme_admin": false, 00:32:11.848 "nvme_io": false, 00:32:11.848 "nvme_io_md": false, 00:32:11.848 "write_zeroes": true, 00:32:11.848 "zcopy": true, 00:32:11.848 "get_zone_info": false, 00:32:11.848 "zone_management": false, 00:32:11.848 "zone_append": false, 00:32:11.848 "compare": false, 00:32:11.848 "compare_and_write": false, 00:32:11.848 "abort": true, 00:32:11.848 "seek_hole": false, 00:32:11.848 "seek_data": false, 00:32:11.848 "copy": true, 00:32:11.848 "nvme_iov_md": false 00:32:11.848 }, 00:32:11.848 "memory_domains": [ 00:32:11.848 { 00:32:11.848 "dma_device_id": "system", 00:32:11.848 "dma_device_type": 1 00:32:11.848 }, 00:32:11.848 { 00:32:11.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:11.848 "dma_device_type": 2 00:32:11.848 } 00:32:11.848 ], 00:32:11.848 "driver_specific": {} 00:32:11.848 } 00:32:11.848 ]' 00:32:11.848 05:27:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:32:11.848 05:27:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:32:11.848 05:27:05 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:32:11.848 05:27:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.848 05:27:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:11.848 [2024-12-09 05:27:05.952097] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:32:11.848 [2024-12-09 05:27:05.952134] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:11.848 [2024-12-09 05:27:05.952156] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1079d20 00:32:11.848 [2024-12-09 05:27:05.952167] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:11.848 [2024-12-09 05:27:05.953521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:11.848 [2024-12-09 05:27:05.953548] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:32:11.848 Passthru0 00:32:11.848 05:27:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.848 05:27:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:32:11.848 05:27:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.848 05:27:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:11.848 05:27:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.848 05:27:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:32:11.848 { 00:32:11.848 "name": "Malloc0", 00:32:11.848 "aliases": [ 00:32:11.848 "dd2418cd-ecd1-490a-a813-b2f255a87b1d" 00:32:11.848 ], 00:32:11.848 "product_name": "Malloc disk", 00:32:11.848 "block_size": 512, 00:32:11.848 "num_blocks": 16384, 00:32:11.848 "uuid": "dd2418cd-ecd1-490a-a813-b2f255a87b1d", 00:32:11.848 "assigned_rate_limits": { 00:32:11.848 "rw_ios_per_sec": 0, 00:32:11.848 "rw_mbytes_per_sec": 0, 00:32:11.848 "r_mbytes_per_sec": 0, 00:32:11.848 "w_mbytes_per_sec": 0 00:32:11.848 }, 00:32:11.848 "claimed": true, 00:32:11.848 "claim_type": "exclusive_write", 00:32:11.848 "zoned": false, 00:32:11.848 "supported_io_types": { 00:32:11.848 "read": true, 00:32:11.848 "write": true, 00:32:11.848 "unmap": true, 00:32:11.848 "flush": 
true, 00:32:11.848 "reset": true, 00:32:11.848 "nvme_admin": false, 00:32:11.848 "nvme_io": false, 00:32:11.848 "nvme_io_md": false, 00:32:11.848 "write_zeroes": true, 00:32:11.848 "zcopy": true, 00:32:11.848 "get_zone_info": false, 00:32:11.848 "zone_management": false, 00:32:11.848 "zone_append": false, 00:32:11.848 "compare": false, 00:32:11.848 "compare_and_write": false, 00:32:11.848 "abort": true, 00:32:11.848 "seek_hole": false, 00:32:11.848 "seek_data": false, 00:32:11.848 "copy": true, 00:32:11.848 "nvme_iov_md": false 00:32:11.848 }, 00:32:11.848 "memory_domains": [ 00:32:11.848 { 00:32:11.848 "dma_device_id": "system", 00:32:11.848 "dma_device_type": 1 00:32:11.848 }, 00:32:11.848 { 00:32:11.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:11.848 "dma_device_type": 2 00:32:11.848 } 00:32:11.848 ], 00:32:11.848 "driver_specific": {} 00:32:11.848 }, 00:32:11.848 { 00:32:11.848 "name": "Passthru0", 00:32:11.848 "aliases": [ 00:32:11.848 "5b8dd313-8b7c-568b-9ba1-254bcd9e444e" 00:32:11.848 ], 00:32:11.848 "product_name": "passthru", 00:32:11.848 "block_size": 512, 00:32:11.848 "num_blocks": 16384, 00:32:11.848 "uuid": "5b8dd313-8b7c-568b-9ba1-254bcd9e444e", 00:32:11.848 "assigned_rate_limits": { 00:32:11.848 "rw_ios_per_sec": 0, 00:32:11.848 "rw_mbytes_per_sec": 0, 00:32:11.848 "r_mbytes_per_sec": 0, 00:32:11.848 "w_mbytes_per_sec": 0 00:32:11.848 }, 00:32:11.848 "claimed": false, 00:32:11.848 "zoned": false, 00:32:11.848 "supported_io_types": { 00:32:11.848 "read": true, 00:32:11.848 "write": true, 00:32:11.848 "unmap": true, 00:32:11.848 "flush": true, 00:32:11.848 "reset": true, 00:32:11.848 "nvme_admin": false, 00:32:11.848 "nvme_io": false, 00:32:11.848 "nvme_io_md": false, 00:32:11.848 "write_zeroes": true, 00:32:11.848 "zcopy": true, 00:32:11.848 "get_zone_info": false, 00:32:11.848 "zone_management": false, 00:32:11.848 "zone_append": false, 00:32:11.848 "compare": false, 00:32:11.848 "compare_and_write": false, 00:32:11.848 "abort": true, 00:32:11.848 "seek_hole": false, 00:32:11.848 "seek_data": false, 00:32:11.848 "copy": true, 00:32:11.848 "nvme_iov_md": false 00:32:11.848 }, 00:32:11.848 "memory_domains": [ 00:32:11.848 { 00:32:11.848 "dma_device_id": "system", 00:32:11.848 "dma_device_type": 1 00:32:11.848 }, 00:32:11.848 { 00:32:11.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:11.848 "dma_device_type": 2 00:32:11.848 } 00:32:11.848 ], 00:32:11.848 "driver_specific": { 00:32:11.848 "passthru": { 00:32:11.848 "name": "Passthru0", 00:32:11.848 "base_bdev_name": "Malloc0" 00:32:11.848 } 00:32:11.848 } 00:32:11.848 } 00:32:11.848 ]' 00:32:11.848 05:27:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:32:11.848 05:27:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:32:11.848 05:27:06 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:32:11.848 05:27:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.848 05:27:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:11.848 05:27:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.848 05:27:06 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:32:11.848 05:27:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.848 05:27:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:11.848 05:27:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.848 05:27:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:32:11.848 05:27:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.848 05:27:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:11.848 05:27:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.848 05:27:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:32:11.848 05:27:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:32:11.848 05:27:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:32:11.848 00:32:11.848 real 0m0.214s 00:32:11.848 user 0m0.136s 00:32:11.848 sys 0m0.021s 00:32:11.848 05:27:06 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:11.848 05:27:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:11.848 ************************************ 00:32:11.848 END TEST rpc_integrity 00:32:11.848 ************************************ 00:32:12.106 05:27:06 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:32:12.106 05:27:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:12.106 05:27:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:12.106 05:27:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:32:12.106 ************************************ 00:32:12.106 START TEST rpc_plugins 00:32:12.106 ************************************ 00:32:12.106 05:27:06 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:32:12.106 05:27:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:32:12.106 05:27:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.106 05:27:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:32:12.106 05:27:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.106 05:27:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:32:12.106 05:27:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:32:12.106 05:27:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.106 05:27:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:32:12.106 05:27:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.106 05:27:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:32:12.106 { 00:32:12.106 "name": "Malloc1", 00:32:12.106 "aliases": [ 00:32:12.106 "8ea8dee1-258b-4a93-acc0-68d37df245b1" 00:32:12.106 ], 00:32:12.106 "product_name": "Malloc disk", 00:32:12.106 "block_size": 4096, 00:32:12.106 "num_blocks": 256, 00:32:12.106 "uuid": "8ea8dee1-258b-4a93-acc0-68d37df245b1", 00:32:12.106 "assigned_rate_limits": { 00:32:12.106 "rw_ios_per_sec": 0, 00:32:12.106 "rw_mbytes_per_sec": 0, 00:32:12.106 "r_mbytes_per_sec": 0, 00:32:12.106 "w_mbytes_per_sec": 0 00:32:12.106 }, 00:32:12.106 "claimed": false, 00:32:12.106 "zoned": false, 00:32:12.106 "supported_io_types": { 00:32:12.106 "read": true, 00:32:12.106 "write": true, 00:32:12.106 "unmap": true, 00:32:12.106 "flush": true, 00:32:12.106 "reset": true, 00:32:12.106 "nvme_admin": false, 00:32:12.106 "nvme_io": false, 00:32:12.106 "nvme_io_md": false, 00:32:12.106 "write_zeroes": true, 00:32:12.106 "zcopy": true, 00:32:12.106 "get_zone_info": false, 00:32:12.106 "zone_management": false, 00:32:12.106 "zone_append": false, 00:32:12.106 "compare": false, 00:32:12.106 "compare_and_write": false, 00:32:12.106 "abort": true, 00:32:12.106 "seek_hole": false, 00:32:12.106 "seek_data": false, 00:32:12.106 "copy": true, 00:32:12.106 "nvme_iov_md": false 
00:32:12.106 }, 00:32:12.106 "memory_domains": [ 00:32:12.106 { 00:32:12.106 "dma_device_id": "system", 00:32:12.106 "dma_device_type": 1 00:32:12.106 }, 00:32:12.106 { 00:32:12.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:12.106 "dma_device_type": 2 00:32:12.106 } 00:32:12.106 ], 00:32:12.106 "driver_specific": {} 00:32:12.106 } 00:32:12.106 ]' 00:32:12.106 05:27:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:32:12.106 05:27:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:32:12.106 05:27:06 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:32:12.106 05:27:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.106 05:27:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:32:12.106 05:27:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.106 05:27:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:32:12.106 05:27:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.106 05:27:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:32:12.106 05:27:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.106 05:27:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:32:12.106 05:27:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:32:12.106 05:27:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:32:12.106 00:32:12.106 real 0m0.108s 00:32:12.106 user 0m0.069s 00:32:12.106 sys 0m0.008s 00:32:12.106 05:27:06 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:12.106 05:27:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:32:12.106 ************************************ 00:32:12.106 END TEST rpc_plugins 00:32:12.106 ************************************ 00:32:12.106 05:27:06 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:32:12.106 05:27:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:12.106 05:27:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:12.106 05:27:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:32:12.106 ************************************ 00:32:12.106 START TEST rpc_trace_cmd_test 00:32:12.106 ************************************ 00:32:12.106 05:27:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:32:12.106 05:27:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:32:12.106 05:27:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:32:12.106 05:27:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.106 05:27:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:32:12.106 05:27:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.106 05:27:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:32:12.106 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid521817", 00:32:12.106 "tpoint_group_mask": "0x8", 00:32:12.106 "iscsi_conn": { 00:32:12.106 "mask": "0x2", 00:32:12.106 "tpoint_mask": "0x0" 00:32:12.106 }, 00:32:12.106 "scsi": { 00:32:12.107 "mask": "0x4", 00:32:12.107 "tpoint_mask": "0x0" 00:32:12.107 }, 00:32:12.107 "bdev": { 00:32:12.107 "mask": "0x8", 00:32:12.107 "tpoint_mask": "0xffffffffffffffff" 00:32:12.107 }, 00:32:12.107 "nvmf_rdma": { 00:32:12.107 "mask": "0x10", 00:32:12.107 "tpoint_mask": "0x0" 00:32:12.107 }, 00:32:12.107 "nvmf_tcp": { 00:32:12.107 "mask": "0x20", 00:32:12.107 
"tpoint_mask": "0x0" 00:32:12.107 }, 00:32:12.107 "ftl": { 00:32:12.107 "mask": "0x40", 00:32:12.107 "tpoint_mask": "0x0" 00:32:12.107 }, 00:32:12.107 "blobfs": { 00:32:12.107 "mask": "0x80", 00:32:12.107 "tpoint_mask": "0x0" 00:32:12.107 }, 00:32:12.107 "dsa": { 00:32:12.107 "mask": "0x200", 00:32:12.107 "tpoint_mask": "0x0" 00:32:12.107 }, 00:32:12.107 "thread": { 00:32:12.107 "mask": "0x400", 00:32:12.107 "tpoint_mask": "0x0" 00:32:12.107 }, 00:32:12.107 "nvme_pcie": { 00:32:12.107 "mask": "0x800", 00:32:12.107 "tpoint_mask": "0x0" 00:32:12.107 }, 00:32:12.107 "iaa": { 00:32:12.107 "mask": "0x1000", 00:32:12.107 "tpoint_mask": "0x0" 00:32:12.107 }, 00:32:12.107 "nvme_tcp": { 00:32:12.107 "mask": "0x2000", 00:32:12.107 "tpoint_mask": "0x0" 00:32:12.107 }, 00:32:12.107 "bdev_nvme": { 00:32:12.107 "mask": "0x4000", 00:32:12.107 "tpoint_mask": "0x0" 00:32:12.107 }, 00:32:12.107 "sock": { 00:32:12.107 "mask": "0x8000", 00:32:12.107 "tpoint_mask": "0x0" 00:32:12.107 }, 00:32:12.107 "blob": { 00:32:12.107 "mask": "0x10000", 00:32:12.107 "tpoint_mask": "0x0" 00:32:12.107 }, 00:32:12.107 "bdev_raid": { 00:32:12.107 "mask": "0x20000", 00:32:12.107 "tpoint_mask": "0x0" 00:32:12.107 }, 00:32:12.107 "scheduler": { 00:32:12.107 "mask": "0x40000", 00:32:12.107 "tpoint_mask": "0x0" 00:32:12.107 } 00:32:12.107 }' 00:32:12.107 05:27:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:32:12.107 05:27:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:32:12.107 05:27:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:32:12.364 05:27:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:32:12.364 05:27:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:32:12.364 05:27:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:32:12.364 05:27:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:32:12.364 05:27:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:32:12.364 05:27:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:32:12.364 05:27:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:32:12.364 00:32:12.364 real 0m0.201s 00:32:12.364 user 0m0.173s 00:32:12.364 sys 0m0.019s 00:32:12.364 05:27:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:12.364 05:27:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:32:12.364 ************************************ 00:32:12.364 END TEST rpc_trace_cmd_test 00:32:12.364 ************************************ 00:32:12.364 05:27:06 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:32:12.364 05:27:06 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:32:12.364 05:27:06 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:32:12.364 05:27:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:12.364 05:27:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:12.364 05:27:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:32:12.364 ************************************ 00:32:12.364 START TEST rpc_daemon_integrity 00:32:12.364 ************************************ 00:32:12.364 05:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:32:12.364 05:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:32:12.364 05:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.364 05:27:06 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:12.364 05:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.364 05:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:32:12.364 05:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:32:12.364 05:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:32:12.364 05:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:32:12.364 05:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.364 05:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:12.364 05:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.364 05:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:32:12.364 05:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:32:12.364 05:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.364 05:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:12.364 05:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.364 05:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:32:12.364 { 00:32:12.364 "name": "Malloc2", 00:32:12.364 "aliases": [ 00:32:12.364 "9e827b86-166d-4d40-9d07-93d211bfa004" 00:32:12.364 ], 00:32:12.364 "product_name": "Malloc disk", 00:32:12.364 "block_size": 512, 00:32:12.364 "num_blocks": 16384, 00:32:12.364 "uuid": "9e827b86-166d-4d40-9d07-93d211bfa004", 00:32:12.364 "assigned_rate_limits": { 00:32:12.364 "rw_ios_per_sec": 0, 00:32:12.364 "rw_mbytes_per_sec": 0, 00:32:12.364 "r_mbytes_per_sec": 0, 00:32:12.364 "w_mbytes_per_sec": 0 00:32:12.364 }, 00:32:12.364 "claimed": false, 00:32:12.364 "zoned": false, 00:32:12.364 "supported_io_types": { 00:32:12.364 "read": true, 00:32:12.364 "write": true, 00:32:12.364 "unmap": true, 00:32:12.364 "flush": true, 00:32:12.364 "reset": true, 00:32:12.364 "nvme_admin": false, 00:32:12.364 "nvme_io": false, 00:32:12.364 "nvme_io_md": false, 00:32:12.364 "write_zeroes": true, 00:32:12.364 "zcopy": true, 00:32:12.364 "get_zone_info": false, 00:32:12.364 "zone_management": false, 00:32:12.364 "zone_append": false, 00:32:12.364 "compare": false, 00:32:12.364 "compare_and_write": false, 00:32:12.364 "abort": true, 00:32:12.364 "seek_hole": false, 00:32:12.364 "seek_data": false, 00:32:12.364 "copy": true, 00:32:12.364 "nvme_iov_md": false 00:32:12.364 }, 00:32:12.364 "memory_domains": [ 00:32:12.364 { 00:32:12.364 "dma_device_id": "system", 00:32:12.364 "dma_device_type": 1 00:32:12.364 }, 00:32:12.364 { 00:32:12.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:12.364 "dma_device_type": 2 00:32:12.364 } 00:32:12.364 ], 00:32:12.364 "driver_specific": {} 00:32:12.364 } 00:32:12.364 ]' 00:32:12.364 05:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:32:12.621 05:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:32:12.621 05:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:32:12.621 05:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.621 05:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:12.621 [2024-12-09 05:27:06.614210] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:32:12.621 
[2024-12-09 05:27:06.614245] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:12.621 [2024-12-09 05:27:06.614296] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf35fc0 00:32:12.621 [2024-12-09 05:27:06.614313] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:12.621 [2024-12-09 05:27:06.615547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:12.621 [2024-12-09 05:27:06.615587] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:32:12.621 Passthru0 00:32:12.621 05:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.621 05:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:32:12.621 05:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.621 05:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:12.621 05:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.621 05:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:32:12.621 { 00:32:12.621 "name": "Malloc2", 00:32:12.621 "aliases": [ 00:32:12.621 "9e827b86-166d-4d40-9d07-93d211bfa004" 00:32:12.621 ], 00:32:12.621 "product_name": "Malloc disk", 00:32:12.621 "block_size": 512, 00:32:12.621 "num_blocks": 16384, 00:32:12.621 "uuid": "9e827b86-166d-4d40-9d07-93d211bfa004", 00:32:12.621 "assigned_rate_limits": { 00:32:12.621 "rw_ios_per_sec": 0, 00:32:12.621 "rw_mbytes_per_sec": 0, 00:32:12.621 "r_mbytes_per_sec": 0, 00:32:12.621 "w_mbytes_per_sec": 0 00:32:12.621 }, 00:32:12.621 "claimed": true, 00:32:12.621 "claim_type": "exclusive_write", 00:32:12.621 "zoned": false, 00:32:12.621 "supported_io_types": { 00:32:12.621 "read": true, 00:32:12.621 "write": true, 00:32:12.621 "unmap": true, 00:32:12.621 "flush": true, 00:32:12.621 "reset": true, 00:32:12.621 "nvme_admin": false, 00:32:12.621 "nvme_io": false, 00:32:12.621 "nvme_io_md": false, 00:32:12.621 "write_zeroes": true, 00:32:12.621 "zcopy": true, 00:32:12.621 "get_zone_info": false, 00:32:12.621 "zone_management": false, 00:32:12.621 "zone_append": false, 00:32:12.621 "compare": false, 00:32:12.621 "compare_and_write": false, 00:32:12.621 "abort": true, 00:32:12.621 "seek_hole": false, 00:32:12.621 "seek_data": false, 00:32:12.621 "copy": true, 00:32:12.621 "nvme_iov_md": false 00:32:12.621 }, 00:32:12.621 "memory_domains": [ 00:32:12.621 { 00:32:12.621 "dma_device_id": "system", 00:32:12.621 "dma_device_type": 1 00:32:12.621 }, 00:32:12.621 { 00:32:12.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:12.621 "dma_device_type": 2 00:32:12.621 } 00:32:12.621 ], 00:32:12.621 "driver_specific": {} 00:32:12.621 }, 00:32:12.621 { 00:32:12.621 "name": "Passthru0", 00:32:12.621 "aliases": [ 00:32:12.621 "a8e4794c-1e87-5ba1-a038-055646761c71" 00:32:12.621 ], 00:32:12.621 "product_name": "passthru", 00:32:12.621 "block_size": 512, 00:32:12.621 "num_blocks": 16384, 00:32:12.621 "uuid": "a8e4794c-1e87-5ba1-a038-055646761c71", 00:32:12.621 "assigned_rate_limits": { 00:32:12.621 "rw_ios_per_sec": 0, 00:32:12.621 "rw_mbytes_per_sec": 0, 00:32:12.621 "r_mbytes_per_sec": 0, 00:32:12.621 "w_mbytes_per_sec": 0 00:32:12.621 }, 00:32:12.621 "claimed": false, 00:32:12.621 "zoned": false, 00:32:12.621 "supported_io_types": { 00:32:12.621 "read": true, 00:32:12.621 "write": true, 00:32:12.621 "unmap": true, 00:32:12.621 "flush": true, 00:32:12.621 "reset": true, 
00:32:12.621 "nvme_admin": false, 00:32:12.621 "nvme_io": false, 00:32:12.621 "nvme_io_md": false, 00:32:12.621 "write_zeroes": true, 00:32:12.621 "zcopy": true, 00:32:12.621 "get_zone_info": false, 00:32:12.621 "zone_management": false, 00:32:12.621 "zone_append": false, 00:32:12.621 "compare": false, 00:32:12.621 "compare_and_write": false, 00:32:12.621 "abort": true, 00:32:12.621 "seek_hole": false, 00:32:12.621 "seek_data": false, 00:32:12.621 "copy": true, 00:32:12.621 "nvme_iov_md": false 00:32:12.621 }, 00:32:12.621 "memory_domains": [ 00:32:12.621 { 00:32:12.621 "dma_device_id": "system", 00:32:12.621 "dma_device_type": 1 00:32:12.621 }, 00:32:12.621 { 00:32:12.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:12.621 "dma_device_type": 2 00:32:12.621 } 00:32:12.621 ], 00:32:12.621 "driver_specific": { 00:32:12.621 "passthru": { 00:32:12.621 "name": "Passthru0", 00:32:12.621 "base_bdev_name": "Malloc2" 00:32:12.621 } 00:32:12.621 } 00:32:12.621 } 00:32:12.621 ]' 00:32:12.621 05:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:32:12.622 05:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:32:12.622 05:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:32:12.622 05:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.622 05:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:12.622 05:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.622 05:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:32:12.622 05:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.622 05:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:12.622 05:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.622 05:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:32:12.622 05:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.622 05:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:12.622 05:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.622 05:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:32:12.622 05:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:32:12.622 05:27:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:32:12.622 00:32:12.622 real 0m0.213s 00:32:12.622 user 0m0.135s 00:32:12.622 sys 0m0.022s 00:32:12.622 05:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:12.622 05:27:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:12.622 ************************************ 00:32:12.622 END TEST rpc_daemon_integrity 00:32:12.622 ************************************ 00:32:12.622 05:27:06 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:12.622 05:27:06 rpc -- rpc/rpc.sh@84 -- # killprocess 521817 00:32:12.622 05:27:06 rpc -- common/autotest_common.sh@954 -- # '[' -z 521817 ']' 00:32:12.622 05:27:06 rpc -- common/autotest_common.sh@958 -- # kill -0 521817 00:32:12.622 05:27:06 rpc -- common/autotest_common.sh@959 -- # uname 00:32:12.622 05:27:06 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:12.622 05:27:06 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 521817 
00:32:12.622 05:27:06 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:12.622 05:27:06 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:12.622 05:27:06 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 521817' 00:32:12.622 killing process with pid 521817 00:32:12.622 05:27:06 rpc -- common/autotest_common.sh@973 -- # kill 521817 00:32:12.622 05:27:06 rpc -- common/autotest_common.sh@978 -- # wait 521817 00:32:13.185 00:32:13.185 real 0m2.008s 00:32:13.185 user 0m2.495s 00:32:13.185 sys 0m0.591s 00:32:13.185 05:27:07 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:13.185 05:27:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:32:13.185 ************************************ 00:32:13.185 END TEST rpc 00:32:13.185 ************************************ 00:32:13.185 05:27:07 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:32:13.185 05:27:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:13.185 05:27:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:13.185 05:27:07 -- common/autotest_common.sh@10 -- # set +x 00:32:13.186 ************************************ 00:32:13.186 START TEST skip_rpc 00:32:13.186 ************************************ 00:32:13.186 05:27:07 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:32:13.186 * Looking for test storage... 00:32:13.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:32:13.186 05:27:07 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:13.186 05:27:07 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:32:13.186 05:27:07 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:13.444 05:27:07 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@345 -- # : 1 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:13.444 05:27:07 skip_rpc -- scripts/common.sh@368 -- # return 0 00:32:13.444 05:27:07 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:13.444 05:27:07 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:13.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.444 --rc genhtml_branch_coverage=1 00:32:13.444 --rc genhtml_function_coverage=1 00:32:13.444 --rc genhtml_legend=1 00:32:13.444 --rc geninfo_all_blocks=1 00:32:13.444 --rc geninfo_unexecuted_blocks=1 00:32:13.444 00:32:13.444 ' 00:32:13.444 05:27:07 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:13.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.444 --rc genhtml_branch_coverage=1 00:32:13.444 --rc genhtml_function_coverage=1 00:32:13.444 --rc genhtml_legend=1 00:32:13.444 --rc geninfo_all_blocks=1 00:32:13.444 --rc geninfo_unexecuted_blocks=1 00:32:13.444 00:32:13.444 ' 00:32:13.444 05:27:07 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:13.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.444 --rc genhtml_branch_coverage=1 00:32:13.444 --rc genhtml_function_coverage=1 00:32:13.444 --rc genhtml_legend=1 00:32:13.444 --rc geninfo_all_blocks=1 00:32:13.444 --rc geninfo_unexecuted_blocks=1 00:32:13.444 00:32:13.444 ' 00:32:13.445 05:27:07 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:13.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.445 --rc genhtml_branch_coverage=1 00:32:13.445 --rc genhtml_function_coverage=1 00:32:13.445 --rc genhtml_legend=1 00:32:13.445 --rc geninfo_all_blocks=1 00:32:13.445 --rc geninfo_unexecuted_blocks=1 00:32:13.445 00:32:13.445 ' 00:32:13.445 05:27:07 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:32:13.445 05:27:07 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:32:13.445 05:27:07 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:32:13.445 05:27:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:13.445 05:27:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:13.445 05:27:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:13.445 ************************************ 00:32:13.445 START TEST skip_rpc 00:32:13.445 ************************************ 00:32:13.445 05:27:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:32:13.445 
05:27:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=522296 00:32:13.445 05:27:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:32:13.445 05:27:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:32:13.445 05:27:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:32:13.445 [2024-12-09 05:27:07.526544] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:32:13.445 [2024-12-09 05:27:07.526670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522296 ] 00:32:13.445 [2024-12-09 05:27:07.598679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.445 [2024-12-09 05:27:07.658012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.699 05:27:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:32:18.699 05:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:32:18.699 05:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:32:18.699 05:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:18.699 05:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:18.699 05:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:18.699 05:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:18.699 05:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:32:18.699 05:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.699 05:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:18.699 05:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:18.699 05:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:32:18.699 05:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:18.699 05:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:18.699 05:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:18.699 05:27:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:32:18.699 05:27:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 522296 00:32:18.699 05:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 522296 ']' 00:32:18.699 05:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 522296 00:32:18.699 05:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:32:18.699 05:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:18.699 05:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 522296 00:32:18.699 05:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:18.699 05:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:18.699 05:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 522296' 00:32:18.699 killing process with pid 522296 00:32:18.699 05:27:12 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 522296 00:32:18.699 05:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 522296 00:32:18.956 00:32:18.956 real 0m5.511s 00:32:18.956 user 0m5.206s 00:32:18.956 sys 0m0.317s 00:32:18.956 05:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:18.956 05:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:18.956 ************************************ 00:32:18.956 END TEST skip_rpc 00:32:18.956 ************************************ 00:32:18.956 05:27:13 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:32:18.956 05:27:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:18.956 05:27:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:18.956 05:27:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:18.956 ************************************ 00:32:18.956 START TEST skip_rpc_with_json 00:32:18.956 ************************************ 00:32:18.956 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:32:18.956 05:27:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:32:18.956 05:27:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=523339 00:32:18.956 05:27:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:32:18.956 05:27:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:32:18.956 05:27:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 523339 00:32:18.956 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 523339 ']' 00:32:18.956 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:18.956 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:18.956 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:18.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:18.956 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:18.956 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:32:18.957 [2024-12-09 05:27:13.090347] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
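skip_rpc above confirmed that a target launched with --no-rpc-server rejects rpc_cmd spdk_get_version (the NOT ... es=1 branch), while skip_rpc_with_json, starting here, goes the other way: it builds nvmf state over RPC and captures it with save_config into the CONFIG_PATH exported earlier. A sketch of the same sequence against a normally started spdk_tgt; the /tmp output path is only an example:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./scripts/rpc.py nvmf_get_transports --trtype tcp    # fails with "No such device" until a transport exists
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > /tmp/config.json      # produces the same subsystem JSON dumped below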
00:32:18.957 [2024-12-09 05:27:13.090445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523339 ] 00:32:18.957 [2024-12-09 05:27:13.156315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.214 [2024-12-09 05:27:13.215981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:19.472 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:19.472 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:32:19.472 05:27:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:32:19.472 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.472 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:32:19.472 [2024-12-09 05:27:13.493199] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:32:19.472 request: 00:32:19.472 { 00:32:19.472 "trtype": "tcp", 00:32:19.472 "method": "nvmf_get_transports", 00:32:19.472 "req_id": 1 00:32:19.472 } 00:32:19.472 Got JSON-RPC error response 00:32:19.472 response: 00:32:19.472 { 00:32:19.472 "code": -19, 00:32:19.472 "message": "No such device" 00:32:19.472 } 00:32:19.472 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:19.472 05:27:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:32:19.472 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.472 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:32:19.472 [2024-12-09 05:27:13.501328] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:19.472 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.472 05:27:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:32:19.472 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.472 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:32:19.472 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.472 05:27:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:32:19.472 { 00:32:19.472 "subsystems": [ 00:32:19.472 { 00:32:19.472 "subsystem": "fsdev", 00:32:19.472 "config": [ 00:32:19.472 { 00:32:19.472 "method": "fsdev_set_opts", 00:32:19.472 "params": { 00:32:19.472 "fsdev_io_pool_size": 65535, 00:32:19.472 "fsdev_io_cache_size": 256 00:32:19.472 } 00:32:19.472 } 00:32:19.472 ] 00:32:19.472 }, 00:32:19.472 { 00:32:19.472 "subsystem": "vfio_user_target", 00:32:19.472 "config": null 00:32:19.473 }, 00:32:19.473 { 00:32:19.473 "subsystem": "keyring", 00:32:19.473 "config": [] 00:32:19.473 }, 00:32:19.473 { 00:32:19.473 "subsystem": "iobuf", 00:32:19.473 "config": [ 00:32:19.473 { 00:32:19.473 "method": "iobuf_set_options", 00:32:19.473 "params": { 00:32:19.473 "small_pool_count": 8192, 00:32:19.473 "large_pool_count": 1024, 00:32:19.473 "small_bufsize": 8192, 00:32:19.473 "large_bufsize": 135168, 00:32:19.473 "enable_numa": false 00:32:19.473 } 00:32:19.473 } 00:32:19.473 
] 00:32:19.473 }, 00:32:19.473 { 00:32:19.473 "subsystem": "sock", 00:32:19.473 "config": [ 00:32:19.473 { 00:32:19.473 "method": "sock_set_default_impl", 00:32:19.473 "params": { 00:32:19.473 "impl_name": "posix" 00:32:19.473 } 00:32:19.473 }, 00:32:19.473 { 00:32:19.473 "method": "sock_impl_set_options", 00:32:19.473 "params": { 00:32:19.473 "impl_name": "ssl", 00:32:19.473 "recv_buf_size": 4096, 00:32:19.473 "send_buf_size": 4096, 00:32:19.473 "enable_recv_pipe": true, 00:32:19.473 "enable_quickack": false, 00:32:19.473 "enable_placement_id": 0, 00:32:19.473 "enable_zerocopy_send_server": true, 00:32:19.473 "enable_zerocopy_send_client": false, 00:32:19.473 "zerocopy_threshold": 0, 00:32:19.473 "tls_version": 0, 00:32:19.473 "enable_ktls": false 00:32:19.473 } 00:32:19.473 }, 00:32:19.473 { 00:32:19.473 "method": "sock_impl_set_options", 00:32:19.473 "params": { 00:32:19.473 "impl_name": "posix", 00:32:19.473 "recv_buf_size": 2097152, 00:32:19.473 "send_buf_size": 2097152, 00:32:19.473 "enable_recv_pipe": true, 00:32:19.473 "enable_quickack": false, 00:32:19.473 "enable_placement_id": 0, 00:32:19.473 "enable_zerocopy_send_server": true, 00:32:19.473 "enable_zerocopy_send_client": false, 00:32:19.473 "zerocopy_threshold": 0, 00:32:19.473 "tls_version": 0, 00:32:19.473 "enable_ktls": false 00:32:19.473 } 00:32:19.473 } 00:32:19.473 ] 00:32:19.473 }, 00:32:19.473 { 00:32:19.473 "subsystem": "vmd", 00:32:19.473 "config": [] 00:32:19.473 }, 00:32:19.473 { 00:32:19.473 "subsystem": "accel", 00:32:19.473 "config": [ 00:32:19.473 { 00:32:19.473 "method": "accel_set_options", 00:32:19.473 "params": { 00:32:19.473 "small_cache_size": 128, 00:32:19.473 "large_cache_size": 16, 00:32:19.473 "task_count": 2048, 00:32:19.473 "sequence_count": 2048, 00:32:19.473 "buf_count": 2048 00:32:19.473 } 00:32:19.473 } 00:32:19.473 ] 00:32:19.473 }, 00:32:19.473 { 00:32:19.473 "subsystem": "bdev", 00:32:19.473 "config": [ 00:32:19.473 { 00:32:19.473 "method": "bdev_set_options", 00:32:19.473 "params": { 00:32:19.473 "bdev_io_pool_size": 65535, 00:32:19.473 "bdev_io_cache_size": 256, 00:32:19.473 "bdev_auto_examine": true, 00:32:19.473 "iobuf_small_cache_size": 128, 00:32:19.473 "iobuf_large_cache_size": 16 00:32:19.473 } 00:32:19.473 }, 00:32:19.473 { 00:32:19.473 "method": "bdev_raid_set_options", 00:32:19.473 "params": { 00:32:19.473 "process_window_size_kb": 1024, 00:32:19.473 "process_max_bandwidth_mb_sec": 0 00:32:19.473 } 00:32:19.473 }, 00:32:19.473 { 00:32:19.473 "method": "bdev_iscsi_set_options", 00:32:19.473 "params": { 00:32:19.473 "timeout_sec": 30 00:32:19.473 } 00:32:19.473 }, 00:32:19.473 { 00:32:19.473 "method": "bdev_nvme_set_options", 00:32:19.473 "params": { 00:32:19.473 "action_on_timeout": "none", 00:32:19.473 "timeout_us": 0, 00:32:19.473 "timeout_admin_us": 0, 00:32:19.473 "keep_alive_timeout_ms": 10000, 00:32:19.473 "arbitration_burst": 0, 00:32:19.473 "low_priority_weight": 0, 00:32:19.473 "medium_priority_weight": 0, 00:32:19.473 "high_priority_weight": 0, 00:32:19.473 "nvme_adminq_poll_period_us": 10000, 00:32:19.473 "nvme_ioq_poll_period_us": 0, 00:32:19.473 "io_queue_requests": 0, 00:32:19.473 "delay_cmd_submit": true, 00:32:19.473 "transport_retry_count": 4, 00:32:19.473 "bdev_retry_count": 3, 00:32:19.473 "transport_ack_timeout": 0, 00:32:19.473 "ctrlr_loss_timeout_sec": 0, 00:32:19.473 "reconnect_delay_sec": 0, 00:32:19.473 "fast_io_fail_timeout_sec": 0, 00:32:19.473 "disable_auto_failback": false, 00:32:19.473 "generate_uuids": false, 00:32:19.473 "transport_tos": 0, 
00:32:19.473 "nvme_error_stat": false, 00:32:19.473 "rdma_srq_size": 0, 00:32:19.473 "io_path_stat": false, 00:32:19.473 "allow_accel_sequence": false, 00:32:19.473 "rdma_max_cq_size": 0, 00:32:19.473 "rdma_cm_event_timeout_ms": 0, 00:32:19.473 "dhchap_digests": [ 00:32:19.473 "sha256", 00:32:19.473 "sha384", 00:32:19.473 "sha512" 00:32:19.473 ], 00:32:19.473 "dhchap_dhgroups": [ 00:32:19.473 "null", 00:32:19.473 "ffdhe2048", 00:32:19.473 "ffdhe3072", 00:32:19.473 "ffdhe4096", 00:32:19.473 "ffdhe6144", 00:32:19.473 "ffdhe8192" 00:32:19.473 ] 00:32:19.473 } 00:32:19.473 }, 00:32:19.473 { 00:32:19.473 "method": "bdev_nvme_set_hotplug", 00:32:19.473 "params": { 00:32:19.473 "period_us": 100000, 00:32:19.473 "enable": false 00:32:19.473 } 00:32:19.473 }, 00:32:19.473 { 00:32:19.473 "method": "bdev_wait_for_examine" 00:32:19.473 } 00:32:19.473 ] 00:32:19.473 }, 00:32:19.473 { 00:32:19.473 "subsystem": "scsi", 00:32:19.473 "config": null 00:32:19.473 }, 00:32:19.473 { 00:32:19.473 "subsystem": "scheduler", 00:32:19.473 "config": [ 00:32:19.473 { 00:32:19.473 "method": "framework_set_scheduler", 00:32:19.473 "params": { 00:32:19.473 "name": "static" 00:32:19.473 } 00:32:19.473 } 00:32:19.473 ] 00:32:19.473 }, 00:32:19.473 { 00:32:19.473 "subsystem": "vhost_scsi", 00:32:19.473 "config": [] 00:32:19.473 }, 00:32:19.473 { 00:32:19.473 "subsystem": "vhost_blk", 00:32:19.473 "config": [] 00:32:19.473 }, 00:32:19.473 { 00:32:19.473 "subsystem": "ublk", 00:32:19.473 "config": [] 00:32:19.473 }, 00:32:19.473 { 00:32:19.473 "subsystem": "nbd", 00:32:19.473 "config": [] 00:32:19.473 }, 00:32:19.473 { 00:32:19.473 "subsystem": "nvmf", 00:32:19.473 "config": [ 00:32:19.473 { 00:32:19.473 "method": "nvmf_set_config", 00:32:19.473 "params": { 00:32:19.473 "discovery_filter": "match_any", 00:32:19.473 "admin_cmd_passthru": { 00:32:19.473 "identify_ctrlr": false 00:32:19.473 }, 00:32:19.473 "dhchap_digests": [ 00:32:19.473 "sha256", 00:32:19.473 "sha384", 00:32:19.473 "sha512" 00:32:19.473 ], 00:32:19.473 "dhchap_dhgroups": [ 00:32:19.473 "null", 00:32:19.473 "ffdhe2048", 00:32:19.473 "ffdhe3072", 00:32:19.473 "ffdhe4096", 00:32:19.473 "ffdhe6144", 00:32:19.473 "ffdhe8192" 00:32:19.473 ] 00:32:19.473 } 00:32:19.473 }, 00:32:19.473 { 00:32:19.473 "method": "nvmf_set_max_subsystems", 00:32:19.473 "params": { 00:32:19.473 "max_subsystems": 1024 00:32:19.473 } 00:32:19.473 }, 00:32:19.473 { 00:32:19.473 "method": "nvmf_set_crdt", 00:32:19.473 "params": { 00:32:19.473 "crdt1": 0, 00:32:19.473 "crdt2": 0, 00:32:19.473 "crdt3": 0 00:32:19.473 } 00:32:19.473 }, 00:32:19.473 { 00:32:19.473 "method": "nvmf_create_transport", 00:32:19.473 "params": { 00:32:19.473 "trtype": "TCP", 00:32:19.473 "max_queue_depth": 128, 00:32:19.473 "max_io_qpairs_per_ctrlr": 127, 00:32:19.473 "in_capsule_data_size": 4096, 00:32:19.473 "max_io_size": 131072, 00:32:19.473 "io_unit_size": 131072, 00:32:19.473 "max_aq_depth": 128, 00:32:19.473 "num_shared_buffers": 511, 00:32:19.473 "buf_cache_size": 4294967295, 00:32:19.473 "dif_insert_or_strip": false, 00:32:19.473 "zcopy": false, 00:32:19.473 "c2h_success": true, 00:32:19.473 "sock_priority": 0, 00:32:19.473 "abort_timeout_sec": 1, 00:32:19.473 "ack_timeout": 0, 00:32:19.473 "data_wr_pool_size": 0 00:32:19.473 } 00:32:19.473 } 00:32:19.473 ] 00:32:19.473 }, 00:32:19.473 { 00:32:19.473 "subsystem": "iscsi", 00:32:19.473 "config": [ 00:32:19.473 { 00:32:19.473 "method": "iscsi_set_options", 00:32:19.473 "params": { 00:32:19.473 "node_base": "iqn.2016-06.io.spdk", 00:32:19.473 "max_sessions": 
128, 00:32:19.473 "max_connections_per_session": 2, 00:32:19.473 "max_queue_depth": 64, 00:32:19.473 "default_time2wait": 2, 00:32:19.473 "default_time2retain": 20, 00:32:19.473 "first_burst_length": 8192, 00:32:19.473 "immediate_data": true, 00:32:19.473 "allow_duplicated_isid": false, 00:32:19.473 "error_recovery_level": 0, 00:32:19.473 "nop_timeout": 60, 00:32:19.473 "nop_in_interval": 30, 00:32:19.473 "disable_chap": false, 00:32:19.473 "require_chap": false, 00:32:19.473 "mutual_chap": false, 00:32:19.473 "chap_group": 0, 00:32:19.473 "max_large_datain_per_connection": 64, 00:32:19.473 "max_r2t_per_connection": 4, 00:32:19.473 "pdu_pool_size": 36864, 00:32:19.473 "immediate_data_pool_size": 16384, 00:32:19.473 "data_out_pool_size": 2048 00:32:19.473 } 00:32:19.473 } 00:32:19.473 ] 00:32:19.473 } 00:32:19.473 ] 00:32:19.473 } 00:32:19.473 05:27:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:32:19.474 05:27:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 523339 00:32:19.474 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 523339 ']' 00:32:19.474 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 523339 00:32:19.474 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:32:19.474 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:19.474 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 523339 00:32:19.732 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:19.732 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:19.732 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 523339' 00:32:19.732 killing process with pid 523339 00:32:19.732 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 523339 00:32:19.732 05:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 523339 00:32:19.990 05:27:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=523481 00:32:19.990 05:27:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:32:19.990 05:27:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:32:25.247 05:27:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 523481 00:32:25.247 05:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 523481 ']' 00:32:25.247 05:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 523481 00:32:25.247 05:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:32:25.247 05:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:25.247 05:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 523481 00:32:25.247 05:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:25.247 05:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:25.247 05:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 523481' 00:32:25.247 killing process with pid 523481 00:32:25.247 05:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 523481 00:32:25.247 05:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 523481 00:32:25.505 05:27:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:32:25.505 05:27:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:32:25.505 00:32:25.505 real 0m6.596s 00:32:25.505 user 0m6.239s 00:32:25.505 sys 0m0.671s 00:32:25.505 05:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:25.505 05:27:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:32:25.505 ************************************ 00:32:25.505 END TEST skip_rpc_with_json 00:32:25.505 ************************************ 00:32:25.505 05:27:19 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:32:25.505 05:27:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:25.505 05:27:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:25.505 05:27:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:25.505 ************************************ 00:32:25.505 START TEST skip_rpc_with_delay 00:32:25.505 ************************************ 00:32:25.505 05:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:32:25.505 05:27:19 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:32:25.505 05:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:32:25.505 05:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:32:25.505 05:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:25.505 05:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:25.505 05:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:25.505 05:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:25.505 05:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:25.505 05:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:25.505 05:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:25.505 05:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:32:25.505 05:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:32:25.763 [2024-12-09 
05:27:19.738922] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:32:25.763 05:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:32:25.763 05:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:25.763 05:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:25.763 05:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:25.763 00:32:25.763 real 0m0.075s 00:32:25.763 user 0m0.050s 00:32:25.763 sys 0m0.025s 00:32:25.763 05:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:25.763 05:27:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:32:25.763 ************************************ 00:32:25.763 END TEST skip_rpc_with_delay 00:32:25.763 ************************************ 00:32:25.763 05:27:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:32:25.763 05:27:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:32:25.763 05:27:19 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:32:25.763 05:27:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:25.763 05:27:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:25.763 05:27:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:25.763 ************************************ 00:32:25.763 START TEST exit_on_failed_rpc_init 00:32:25.763 ************************************ 00:32:25.763 05:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:32:25.763 05:27:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=524197 00:32:25.763 05:27:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:32:25.763 05:27:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 524197 00:32:25.763 05:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 524197 ']' 00:32:25.763 05:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:25.763 05:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:25.763 05:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:25.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:25.763 05:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:25.763 05:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:32:25.763 [2024-12-09 05:27:19.861768] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:32:25.763 [2024-12-09 05:27:19.861863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524197 ] 00:32:25.763 [2024-12-09 05:27:19.927547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.021 [2024-12-09 05:27:19.988750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:26.279 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:26.279 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:32:26.279 05:27:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:32:26.279 05:27:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:32:26.279 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:32:26.279 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:32:26.279 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:26.279 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:26.279 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:26.279 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:26.279 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:26.279 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:26.279 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:26.279 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:32:26.279 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:32:26.279 [2024-12-09 05:27:20.332439] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:32:26.279 [2024-12-09 05:27:20.332523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524320 ] 00:32:26.279 [2024-12-09 05:27:20.401550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.279 [2024-12-09 05:27:20.462280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:26.279 [2024-12-09 05:27:20.462399] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:32:26.279 [2024-12-09 05:27:20.462420] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:26.279 [2024-12-09 05:27:20.462432] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:26.537 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:32:26.537 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:26.537 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:32:26.537 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:32:26.537 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:32:26.537 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:26.537 05:27:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:32:26.537 05:27:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 524197 00:32:26.537 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 524197 ']' 00:32:26.537 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 524197 00:32:26.537 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:32:26.537 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:26.537 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 524197 00:32:26.537 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:26.537 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:26.537 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 524197' 00:32:26.537 killing process with pid 524197 00:32:26.537 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 524197 00:32:26.537 05:27:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 524197 00:32:27.101 00:32:27.101 real 0m1.275s 00:32:27.101 user 0m1.422s 00:32:27.101 sys 0m0.457s 00:32:27.101 05:27:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:27.101 05:27:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:32:27.101 ************************************ 00:32:27.101 END TEST exit_on_failed_rpc_init 00:32:27.101 ************************************ 00:32:27.101 05:27:21 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:32:27.101 00:32:27.101 real 0m13.813s 00:32:27.101 user 0m13.099s 00:32:27.101 sys 0m1.662s 00:32:27.101 05:27:21 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:27.101 05:27:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:27.101 ************************************ 00:32:27.101 END TEST skip_rpc 00:32:27.101 ************************************ 00:32:27.101 05:27:21 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:32:27.101 05:27:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:27.101 05:27:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:27.101 05:27:21 -- 
common/autotest_common.sh@10 -- # set +x 00:32:27.101 ************************************ 00:32:27.101 START TEST rpc_client 00:32:27.101 ************************************ 00:32:27.101 05:27:21 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:32:27.101 * Looking for test storage... 00:32:27.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:32:27.101 05:27:21 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:27.101 05:27:21 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:32:27.101 05:27:21 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:27.101 05:27:21 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@345 -- # : 1 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@353 -- # local d=1 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@355 -- # echo 1 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@353 -- # local d=2 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@355 -- # echo 2 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:27.101 05:27:21 rpc_client -- scripts/common.sh@368 -- # return 0 00:32:27.101 05:27:21 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:27.101 05:27:21 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:27.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.101 --rc genhtml_branch_coverage=1 00:32:27.101 --rc genhtml_function_coverage=1 00:32:27.101 --rc genhtml_legend=1 00:32:27.101 --rc geninfo_all_blocks=1 00:32:27.101 --rc geninfo_unexecuted_blocks=1 00:32:27.101 00:32:27.101 ' 00:32:27.101 05:27:21 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:27.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.101 --rc genhtml_branch_coverage=1 00:32:27.101 --rc genhtml_function_coverage=1 00:32:27.101 --rc genhtml_legend=1 00:32:27.101 --rc geninfo_all_blocks=1 00:32:27.101 --rc geninfo_unexecuted_blocks=1 00:32:27.101 00:32:27.101 ' 00:32:27.101 05:27:21 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:27.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.101 --rc genhtml_branch_coverage=1 00:32:27.101 --rc genhtml_function_coverage=1 00:32:27.101 --rc genhtml_legend=1 00:32:27.101 --rc geninfo_all_blocks=1 00:32:27.101 --rc geninfo_unexecuted_blocks=1 00:32:27.101 00:32:27.101 ' 00:32:27.101 05:27:21 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:27.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.101 --rc genhtml_branch_coverage=1 00:32:27.101 --rc genhtml_function_coverage=1 00:32:27.101 --rc genhtml_legend=1 00:32:27.101 --rc geninfo_all_blocks=1 00:32:27.101 --rc geninfo_unexecuted_blocks=1 00:32:27.101 00:32:27.101 ' 00:32:27.101 05:27:21 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:32:27.101 OK 00:32:27.102 05:27:21 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:32:27.102 00:32:27.102 real 0m0.158s 00:32:27.102 user 0m0.113s 00:32:27.102 sys 0m0.055s 00:32:27.102 05:27:21 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:27.102 05:27:21 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:32:27.102 ************************************ 00:32:27.102 END TEST rpc_client 00:32:27.102 ************************************ 00:32:27.360 05:27:21 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:32:27.360 05:27:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:27.360 05:27:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:27.360 05:27:21 -- common/autotest_common.sh@10 -- # set +x 00:32:27.360 ************************************ 00:32:27.360 START TEST json_config 00:32:27.360 ************************************ 00:32:27.360 05:27:21 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:32:27.360 05:27:21 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:27.360 05:27:21 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:32:27.360 05:27:21 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:27.360 05:27:21 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:27.360 05:27:21 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:27.360 05:27:21 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:27.360 05:27:21 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:27.360 05:27:21 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:32:27.360 05:27:21 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:32:27.360 05:27:21 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:32:27.360 05:27:21 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:32:27.360 05:27:21 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:32:27.360 05:27:21 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:32:27.360 05:27:21 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:32:27.360 05:27:21 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:27.360 05:27:21 json_config -- scripts/common.sh@344 -- # case "$op" in 00:32:27.360 05:27:21 json_config -- scripts/common.sh@345 -- # : 1 00:32:27.360 05:27:21 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:27.360 05:27:21 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:27.360 05:27:21 json_config -- scripts/common.sh@365 -- # decimal 1 00:32:27.360 05:27:21 json_config -- scripts/common.sh@353 -- # local d=1 00:32:27.360 05:27:21 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:27.360 05:27:21 json_config -- scripts/common.sh@355 -- # echo 1 00:32:27.360 05:27:21 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:32:27.360 05:27:21 json_config -- scripts/common.sh@366 -- # decimal 2 00:32:27.360 05:27:21 json_config -- scripts/common.sh@353 -- # local d=2 00:32:27.360 05:27:21 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:27.360 05:27:21 json_config -- scripts/common.sh@355 -- # echo 2 00:32:27.360 05:27:21 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:32:27.360 05:27:21 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:27.360 05:27:21 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:27.360 05:27:21 json_config -- scripts/common.sh@368 -- # return 0 00:32:27.360 05:27:21 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:27.360 05:27:21 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:27.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.360 --rc genhtml_branch_coverage=1 00:32:27.360 --rc genhtml_function_coverage=1 00:32:27.360 --rc genhtml_legend=1 00:32:27.360 --rc geninfo_all_blocks=1 00:32:27.360 --rc geninfo_unexecuted_blocks=1 00:32:27.360 00:32:27.360 ' 00:32:27.360 05:27:21 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:27.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.360 --rc genhtml_branch_coverage=1 00:32:27.360 --rc genhtml_function_coverage=1 00:32:27.360 --rc genhtml_legend=1 00:32:27.360 --rc geninfo_all_blocks=1 00:32:27.360 --rc geninfo_unexecuted_blocks=1 00:32:27.360 00:32:27.360 ' 00:32:27.360 05:27:21 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:27.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.360 --rc genhtml_branch_coverage=1 00:32:27.360 --rc genhtml_function_coverage=1 00:32:27.360 --rc genhtml_legend=1 00:32:27.360 --rc geninfo_all_blocks=1 00:32:27.360 --rc geninfo_unexecuted_blocks=1 00:32:27.360 00:32:27.360 ' 00:32:27.360 05:27:21 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:27.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.360 --rc genhtml_branch_coverage=1 00:32:27.360 --rc genhtml_function_coverage=1 00:32:27.360 --rc genhtml_legend=1 00:32:27.360 --rc geninfo_all_blocks=1 00:32:27.360 --rc geninfo_unexecuted_blocks=1 00:32:27.360 00:32:27.360 ' 00:32:27.360 05:27:21 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:27.360 05:27:21 json_config -- nvmf/common.sh@7 -- # uname -s 00:32:27.361 05:27:21 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:27.361 05:27:21 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:27.361 05:27:21 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:27.361 05:27:21 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:27.361 05:27:21 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:27.361 05:27:21 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:27.361 05:27:21 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:32:27.361 05:27:21 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:27.361 05:27:21 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:27.361 05:27:21 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:27.361 05:27:21 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:27.361 05:27:21 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:27.361 05:27:21 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:27.361 05:27:21 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:27.361 05:27:21 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:32:27.361 05:27:21 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:27.361 05:27:21 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:27.361 05:27:21 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:32:27.361 05:27:21 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.361 05:27:21 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.361 05:27:21 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.361 05:27:21 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.361 05:27:21 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.361 05:27:21 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.361 05:27:21 json_config -- paths/export.sh@5 -- # export PATH 00:32:27.361 05:27:21 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.361 05:27:21 json_config -- nvmf/common.sh@51 -- # : 0 00:32:27.361 05:27:21 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:27.361 05:27:21 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:32:27.361 05:27:21 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:27.361 05:27:21 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:27.361 05:27:21 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:27.361 05:27:21 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:27.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:27.361 05:27:21 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:27.361 05:27:21 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:27.361 05:27:21 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:27.361 05:27:21 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:32:27.361 05:27:21 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:32:27.361 05:27:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:32:27.361 05:27:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:32:27.361 05:27:21 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:32:27.361 05:27:21 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:32:27.361 05:27:21 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:32:27.361 05:27:21 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:32:27.361 05:27:21 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:32:27.361 05:27:21 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:32:27.361 05:27:21 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:32:27.361 05:27:21 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:32:27.361 05:27:21 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:32:27.361 05:27:21 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:32:27.361 05:27:21 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:32:27.361 05:27:21 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:32:27.361 INFO: JSON configuration test init 00:32:27.361 05:27:21 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:32:27.361 05:27:21 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:32:27.361 05:27:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:27.361 05:27:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:32:27.361 05:27:21 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:32:27.361 05:27:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:27.361 05:27:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:32:27.361 05:27:21 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:32:27.361 05:27:21 json_config -- 
json_config/common.sh@9 -- # local app=target 00:32:27.361 05:27:21 json_config -- json_config/common.sh@10 -- # shift 00:32:27.361 05:27:21 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:32:27.361 05:27:21 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:32:27.361 05:27:21 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:32:27.361 05:27:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:32:27.361 05:27:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:32:27.361 05:27:21 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=524586 00:32:27.361 05:27:21 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:32:27.361 05:27:21 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:32:27.361 Waiting for target to run... 00:32:27.361 05:27:21 json_config -- json_config/common.sh@25 -- # waitforlisten 524586 /var/tmp/spdk_tgt.sock 00:32:27.361 05:27:21 json_config -- common/autotest_common.sh@835 -- # '[' -z 524586 ']' 00:32:27.361 05:27:21 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:32:27.361 05:27:21 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:27.361 05:27:21 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:32:27.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:32:27.361 05:27:21 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:27.361 05:27:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:32:27.361 [2024-12-09 05:27:21.569874] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:32:27.361 [2024-12-09 05:27:21.569971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524586 ] 00:32:27.929 [2024-12-09 05:27:22.081840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.929 [2024-12-09 05:27:22.133490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.495 05:27:22 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:28.495 05:27:22 json_config -- common/autotest_common.sh@868 -- # return 0 00:32:28.495 05:27:22 json_config -- json_config/common.sh@26 -- # echo '' 00:32:28.495 00:32:28.495 05:27:22 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:32:28.495 05:27:22 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:32:28.495 05:27:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:28.495 05:27:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:32:28.495 05:27:22 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:32:28.495 05:27:22 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:32:28.495 05:27:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:28.495 05:27:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:32:28.495 05:27:22 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:32:28.495 05:27:22 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:32:28.495 05:27:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:32:31.778 05:27:25 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:32:31.778 05:27:25 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:32:31.778 05:27:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:31.778 05:27:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:32:31.778 05:27:25 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:32:31.778 05:27:25 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:32:31.778 05:27:25 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:32:31.778 05:27:25 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:32:31.778 05:27:25 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:32:31.778 05:27:25 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:32:31.778 05:27:25 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:32:31.778 05:27:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:32:32.036 05:27:26 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:32:32.036 05:27:26 json_config -- json_config/json_config.sh@51 -- # local get_types 00:32:32.036 05:27:26 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:32:32.036 05:27:26 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:32:32.036 05:27:26 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:32:32.036 05:27:26 json_config -- json_config/json_config.sh@54 -- # sort 00:32:32.036 05:27:26 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:32:32.036 05:27:26 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:32:32.036 05:27:26 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:32:32.036 05:27:26 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:32:32.036 05:27:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:32.036 05:27:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:32:32.036 05:27:26 json_config -- json_config/json_config.sh@62 -- # return 0 00:32:32.036 05:27:26 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:32:32.036 05:27:26 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:32:32.036 05:27:26 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:32:32.036 05:27:26 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:32:32.036 05:27:26 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:32:32.036 05:27:26 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:32:32.036 05:27:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:32.036 05:27:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:32:32.036 05:27:26 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:32:32.036 05:27:26 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:32:32.036 05:27:26 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:32:32.036 05:27:26 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:32:32.036 05:27:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:32:32.294 MallocForNvmf0 00:32:32.294 05:27:26 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:32:32.294 05:27:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:32:32.552 MallocForNvmf1 00:32:32.552 05:27:26 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:32:32.552 05:27:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:32:32.811 [2024-12-09 05:27:26.855278] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:32.811 05:27:26 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:32.811 05:27:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:33.069 05:27:27 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:32:33.069 05:27:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:32:33.327 05:27:27 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:32:33.327 05:27:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:32:33.585 05:27:27 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:32:33.585 05:27:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:32:33.843 [2024-12-09 05:27:27.930764] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:33.843 05:27:27 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:32:33.843 05:27:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:33.843 05:27:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:32:33.843 05:27:27 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:32:33.843 05:27:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:33.843 05:27:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:32:33.843 05:27:27 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:32:33.843 05:27:27 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:32:33.843 05:27:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:32:34.102 MallocBdevForConfigChangeCheck 00:32:34.102 05:27:28 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:32:34.102 05:27:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:34.102 05:27:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:32:34.102 05:27:28 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:32:34.102 05:27:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:32:34.667 05:27:28 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:32:34.667 INFO: shutting down applications... 
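Before the shutdown that follows, the json_config test built its target state through a short RPC sequence visible in the trace above: two malloc bdevs, a TCP transport, one subsystem carrying both namespaces and a 127.0.0.1:4420 listener, then a save_config snapshot. The sketch below only condenses those logged calls into plain shell; the rpc() helper and SPDK_DIR are stand-ins for the workspace paths shown in the log, not part of the test scripts themselves.

  # Hypothetical helper; SPDK_DIR stands in for the workspace checkout used in this run.
  rpc() { "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }

  rpc bdev_malloc_create 8 512 --name MallocForNvmf0         # size (MB) / block size, as logged
  rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  rpc nvmf_create_transport -t tcp -u 8192 -c 0              # "TCP Transport Init" notice above
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  rpc save_config > spdk_tgt_config.json                     # snapshot consumed by the relaunch below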
00:32:34.667 05:27:28 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:32:34.667 05:27:28 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:32:34.667 05:27:28 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:32:34.667 05:27:28 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:32:36.563 Calling clear_iscsi_subsystem 00:32:36.563 Calling clear_nvmf_subsystem 00:32:36.563 Calling clear_nbd_subsystem 00:32:36.563 Calling clear_ublk_subsystem 00:32:36.563 Calling clear_vhost_blk_subsystem 00:32:36.563 Calling clear_vhost_scsi_subsystem 00:32:36.563 Calling clear_bdev_subsystem 00:32:36.563 05:27:30 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:32:36.563 05:27:30 json_config -- json_config/json_config.sh@350 -- # count=100 00:32:36.563 05:27:30 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:32:36.563 05:27:30 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:32:36.564 05:27:30 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:32:36.564 05:27:30 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:32:36.564 05:27:30 json_config -- json_config/json_config.sh@352 -- # break 00:32:36.564 05:27:30 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:32:36.564 05:27:30 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:32:36.564 05:27:30 json_config -- json_config/common.sh@31 -- # local app=target 00:32:36.564 05:27:30 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:32:36.564 05:27:30 json_config -- json_config/common.sh@35 -- # [[ -n 524586 ]] 00:32:36.564 05:27:30 json_config -- json_config/common.sh@38 -- # kill -SIGINT 524586 00:32:36.564 05:27:30 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:32:36.564 05:27:30 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:32:36.564 05:27:30 json_config -- json_config/common.sh@41 -- # kill -0 524586 00:32:36.564 05:27:30 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:32:37.128 05:27:31 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:32:37.128 05:27:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:32:37.128 05:27:31 json_config -- json_config/common.sh@41 -- # kill -0 524586 00:32:37.128 05:27:31 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:32:37.128 05:27:31 json_config -- json_config/common.sh@43 -- # break 00:32:37.128 05:27:31 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:32:37.128 05:27:31 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:32:37.128 SPDK target shutdown done 00:32:37.128 05:27:31 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:32:37.128 INFO: relaunching applications... 
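The 'SPDK target shutdown done' line above comes out of a polling loop in json_config/common.sh: SIGINT is sent once, then the pid is probed with kill -0 up to 30 times with a 0.5 s sleep between probes. A stripped-down sketch of that pattern (pid value taken from the trace, otherwise simplified):

  pid=524586
  kill -SIGINT "$pid"
  for i in $(seq 1 30); do
      kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
      sleep 0.5
  done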
00:32:37.128 05:27:31 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:32:37.128 05:27:31 json_config -- json_config/common.sh@9 -- # local app=target 00:32:37.128 05:27:31 json_config -- json_config/common.sh@10 -- # shift 00:32:37.128 05:27:31 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:32:37.128 05:27:31 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:32:37.128 05:27:31 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:32:37.128 05:27:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:32:37.128 05:27:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:32:37.128 05:27:31 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=525795 00:32:37.128 05:27:31 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:32:37.128 05:27:31 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:32:37.128 Waiting for target to run... 00:32:37.128 05:27:31 json_config -- json_config/common.sh@25 -- # waitforlisten 525795 /var/tmp/spdk_tgt.sock 00:32:37.128 05:27:31 json_config -- common/autotest_common.sh@835 -- # '[' -z 525795 ']' 00:32:37.128 05:27:31 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:32:37.128 05:27:31 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:37.128 05:27:31 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:32:37.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:32:37.128 05:27:31 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:37.128 05:27:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:32:37.128 [2024-12-09 05:27:31.329593] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:32:37.128 [2024-12-09 05:27:31.329687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid525795 ] 00:32:37.694 [2024-12-09 05:27:31.872919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:37.952 [2024-12-09 05:27:31.926280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:41.245 [2024-12-09 05:27:34.976044] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:41.245 [2024-12-09 05:27:35.008538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:41.245 05:27:35 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:41.245 05:27:35 json_config -- common/autotest_common.sh@868 -- # return 0 00:32:41.245 05:27:35 json_config -- json_config/common.sh@26 -- # echo '' 00:32:41.245 00:32:41.245 05:27:35 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:32:41.245 05:27:35 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:32:41.245 INFO: Checking if target configuration is the same... 
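The relaunch above restarts the target from the JSON produced by the earlier save_config call, so the whole configuration is expected to survive the restart. A minimal sketch of that save-and-relaunch round trip (paths and spdk_tgt flags taken from the trace; $old_pid is a placeholder for the previous instance's pid):

  RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $RPC save_config > spdk_tgt_config.json        # dump the live configuration
  kill -SIGINT "$old_pid"                        # stop the running target first
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json spdk_tgt_config.json &              # relaunch with the saved config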
00:32:41.245 05:27:35 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:32:41.245 05:27:35 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:32:41.245 05:27:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:32:41.245 + '[' 2 -ne 2 ']' 00:32:41.245 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:32:41.245 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:32:41.245 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:41.245 +++ basename /dev/fd/62 00:32:41.245 ++ mktemp /tmp/62.XXX 00:32:41.245 + tmp_file_1=/tmp/62.kSz 00:32:41.245 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:32:41.245 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:32:41.245 + tmp_file_2=/tmp/spdk_tgt_config.json.l3X 00:32:41.245 + ret=0 00:32:41.245 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:32:41.245 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:32:41.504 + diff -u /tmp/62.kSz /tmp/spdk_tgt_config.json.l3X 00:32:41.504 + echo 'INFO: JSON config files are the same' 00:32:41.504 INFO: JSON config files are the same 00:32:41.504 + rm /tmp/62.kSz /tmp/spdk_tgt_config.json.l3X 00:32:41.504 + exit 0 00:32:41.504 05:27:35 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:32:41.504 05:27:35 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:32:41.504 INFO: changing configuration and checking if this can be detected... 00:32:41.504 05:27:35 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:32:41.504 05:27:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:32:41.762 05:27:35 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:32:41.762 05:27:35 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:32:41.762 05:27:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:32:41.762 + '[' 2 -ne 2 ']' 00:32:41.762 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:32:41.762 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:32:41.762 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:41.762 +++ basename /dev/fd/62 00:32:41.762 ++ mktemp /tmp/62.XXX 00:32:41.762 + tmp_file_1=/tmp/62.ssr 00:32:41.762 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:32:41.762 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:32:41.762 + tmp_file_2=/tmp/spdk_tgt_config.json.8Sf 00:32:41.762 + ret=0 00:32:41.762 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:32:42.020 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:32:42.020 + diff -u /tmp/62.ssr /tmp/spdk_tgt_config.json.8Sf 00:32:42.020 + ret=1 00:32:42.020 + echo '=== Start of file: /tmp/62.ssr ===' 00:32:42.020 + cat /tmp/62.ssr 00:32:42.020 + echo '=== End of file: /tmp/62.ssr ===' 00:32:42.020 + echo '' 00:32:42.020 + echo '=== Start of file: /tmp/spdk_tgt_config.json.8Sf ===' 00:32:42.020 + cat /tmp/spdk_tgt_config.json.8Sf 00:32:42.020 + echo '=== End of file: /tmp/spdk_tgt_config.json.8Sf ===' 00:32:42.020 + echo '' 00:32:42.020 + rm /tmp/62.ssr /tmp/spdk_tgt_config.json.8Sf 00:32:42.020 + exit 1 00:32:42.020 05:27:36 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:32:42.020 INFO: configuration change detected. 00:32:42.020 05:27:36 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:32:42.020 05:27:36 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:32:42.020 05:27:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:42.020 05:27:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:32:42.020 05:27:36 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:32:42.020 05:27:36 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:32:42.020 05:27:36 json_config -- json_config/json_config.sh@324 -- # [[ -n 525795 ]] 00:32:42.020 05:27:36 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:32:42.020 05:27:36 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:32:42.020 05:27:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:42.020 05:27:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:32:42.278 05:27:36 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:32:42.278 05:27:36 json_config -- json_config/json_config.sh@200 -- # uname -s 00:32:42.278 05:27:36 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:32:42.278 05:27:36 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:32:42.278 05:27:36 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:32:42.278 05:27:36 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:32:42.278 05:27:36 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:42.278 05:27:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:32:42.278 05:27:36 json_config -- json_config/json_config.sh@330 -- # killprocess 525795 00:32:42.278 05:27:36 json_config -- common/autotest_common.sh@954 -- # '[' -z 525795 ']' 00:32:42.278 05:27:36 json_config -- common/autotest_common.sh@958 -- # kill -0 525795 00:32:42.278 05:27:36 json_config -- common/autotest_common.sh@959 -- # uname 00:32:42.278 05:27:36 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:42.278 05:27:36 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 525795 00:32:42.278 05:27:36 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:42.278 05:27:36 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:42.278 05:27:36 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 525795' 00:32:42.278 killing process with pid 525795 00:32:42.278 05:27:36 json_config -- common/autotest_common.sh@973 -- # kill 525795 00:32:42.278 05:27:36 json_config -- common/autotest_common.sh@978 -- # wait 525795 00:32:44.176 05:27:37 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:32:44.176 05:27:37 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:32:44.176 05:27:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:44.176 05:27:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:32:44.176 05:27:37 json_config -- json_config/json_config.sh@335 -- # return 0 00:32:44.176 05:27:37 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:32:44.176 INFO: Success 00:32:44.176 00:32:44.176 real 0m16.620s 00:32:44.176 user 0m18.073s 00:32:44.176 sys 0m2.810s 00:32:44.176 05:27:37 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:44.176 05:27:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:32:44.176 ************************************ 00:32:44.176 END TEST json_config 00:32:44.176 ************************************ 00:32:44.176 05:27:38 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:32:44.176 05:27:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:44.176 05:27:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:44.176 05:27:38 -- common/autotest_common.sh@10 -- # set +x 00:32:44.176 ************************************ 00:32:44.176 START TEST json_config_extra_key 00:32:44.176 ************************************ 00:32:44.176 05:27:38 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:32:44.176 05:27:38 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:44.176 05:27:38 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:32:44.176 05:27:38 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:44.176 05:27:38 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:44.176 05:27:38 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:44.176 05:27:38 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:44.176 05:27:38 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:44.176 05:27:38 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:32:44.176 05:27:38 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:32:44.176 05:27:38 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:32:44.176 05:27:38 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:32:44.176 05:27:38 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:32:44.176 05:27:38 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:32:44.176 05:27:38 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:32:44.176 05:27:38 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:44.176 05:27:38 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:32:44.176 05:27:38 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:32:44.176 05:27:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:44.176 05:27:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:44.176 05:27:38 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:32:44.176 05:27:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:32:44.176 05:27:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:44.176 05:27:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:32:44.176 05:27:38 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:32:44.176 05:27:38 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:32:44.176 05:27:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:32:44.176 05:27:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:44.176 05:27:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:32:44.176 05:27:38 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:32:44.176 05:27:38 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:44.176 05:27:38 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:44.176 05:27:38 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:32:44.176 05:27:38 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:44.176 05:27:38 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:44.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.176 --rc genhtml_branch_coverage=1 00:32:44.176 --rc genhtml_function_coverage=1 00:32:44.176 --rc genhtml_legend=1 00:32:44.176 --rc geninfo_all_blocks=1 00:32:44.176 --rc geninfo_unexecuted_blocks=1 00:32:44.176 00:32:44.176 ' 00:32:44.176 05:27:38 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:44.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.176 --rc genhtml_branch_coverage=1 00:32:44.176 --rc genhtml_function_coverage=1 00:32:44.176 --rc genhtml_legend=1 00:32:44.176 --rc geninfo_all_blocks=1 00:32:44.176 --rc geninfo_unexecuted_blocks=1 00:32:44.176 00:32:44.176 ' 00:32:44.176 05:27:38 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:44.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.176 --rc genhtml_branch_coverage=1 00:32:44.176 --rc genhtml_function_coverage=1 00:32:44.176 --rc genhtml_legend=1 00:32:44.176 --rc geninfo_all_blocks=1 00:32:44.176 --rc geninfo_unexecuted_blocks=1 00:32:44.176 00:32:44.176 ' 00:32:44.176 05:27:38 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:44.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.176 --rc genhtml_branch_coverage=1 00:32:44.176 --rc genhtml_function_coverage=1 00:32:44.176 --rc genhtml_legend=1 00:32:44.176 --rc geninfo_all_blocks=1 00:32:44.176 --rc geninfo_unexecuted_blocks=1 00:32:44.176 00:32:44.176 ' 00:32:44.176 05:27:38 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:44.176 05:27:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:32:44.176 05:27:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:44.176 05:27:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:44.176 05:27:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:44.176 05:27:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:44.176 05:27:38 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:44.176 05:27:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:44.176 05:27:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:44.176 05:27:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:44.177 05:27:38 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:44.177 05:27:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:44.177 05:27:38 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:44.177 05:27:38 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:44.177 05:27:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:44.177 05:27:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:44.177 05:27:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:32:44.177 05:27:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:44.177 05:27:38 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:44.177 05:27:38 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:32:44.177 05:27:38 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:44.177 05:27:38 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:44.177 05:27:38 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:44.177 05:27:38 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.177 05:27:38 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.177 05:27:38 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.177 05:27:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:32:44.177 05:27:38 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.177 05:27:38 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:32:44.177 05:27:38 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:44.177 05:27:38 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:44.177 05:27:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:44.177 05:27:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:44.177 05:27:38 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:44.177 05:27:38 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:44.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:44.177 05:27:38 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:44.177 05:27:38 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:44.177 05:27:38 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:44.177 05:27:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:32:44.177 05:27:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:32:44.177 05:27:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:32:44.177 05:27:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:32:44.177 05:27:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:32:44.177 05:27:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:32:44.177 05:27:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:32:44.177 05:27:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:32:44.177 05:27:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:32:44.177 05:27:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:32:44.177 05:27:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:32:44.177 INFO: launching applications... 
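The declare -A lines above show how the json_config helpers keep per-app state: one associative array each for the pid, RPC socket, extra spdk_tgt parameters and config file, all keyed by app name ('target' here). A tiny sketch of that bookkeeping pattern, with values copied from the trace and paths shortened:

  declare -A app_pid=()
  declare -A app_socket=( [target]=/var/tmp/spdk_tgt.sock )
  declare -A app_params=( [target]='-m 0x1 -s 1024' )
  declare -A configs_path=( [target]=test/json_config/extra_key.json )
  app=target
  build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" \
      --json "${configs_path[$app]}" &
  app_pid[$app]=$!      # remembered so the shutdown loop can later kill -0 it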
00:32:44.177 05:27:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:32:44.177 05:27:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:32:44.177 05:27:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:32:44.177 05:27:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:32:44.177 05:27:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:32:44.177 05:27:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:32:44.177 05:27:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:32:44.177 05:27:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:32:44.177 05:27:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=526829 00:32:44.177 05:27:38 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:32:44.177 05:27:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:32:44.177 Waiting for target to run... 00:32:44.177 05:27:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 526829 /var/tmp/spdk_tgt.sock 00:32:44.177 05:27:38 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 526829 ']' 00:32:44.177 05:27:38 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:32:44.177 05:27:38 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:44.177 05:27:38 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:32:44.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:32:44.177 05:27:38 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:44.177 05:27:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:32:44.177 [2024-12-09 05:27:38.234542] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:32:44.177 [2024-12-09 05:27:38.234641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid526829 ] 00:32:44.435 [2024-12-09 05:27:38.569869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.435 [2024-12-09 05:27:38.612181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.000 05:27:39 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:45.000 05:27:39 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:32:45.000 05:27:39 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:32:45.000 00:32:45.000 05:27:39 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:32:45.000 INFO: shutting down applications... 
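The 'Waiting for process to start up and listen on UNIX domain socket ...' step blocks until the freshly launched target answers RPC on its socket; the real helper is waitforlisten with max_retries=100, as the trace shows. A rough stand-in with the same effect (not the actual implementation) would poll an RPC that always exists:

  sock=/var/tmp/spdk_tgt.sock
  for try in $(seq 1 100); do                    # max_retries=100, as in the trace
      scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1                                  # retry interval is illustrative
  done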
00:32:45.000 05:27:39 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:32:45.000 05:27:39 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:32:45.000 05:27:39 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:32:45.000 05:27:39 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 526829 ]] 00:32:45.000 05:27:39 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 526829 00:32:45.001 05:27:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:32:45.001 05:27:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:32:45.001 05:27:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 526829 00:32:45.001 05:27:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:32:45.567 05:27:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:32:45.567 05:27:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:32:45.567 05:27:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 526829 00:32:45.567 05:27:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:32:45.567 05:27:39 json_config_extra_key -- json_config/common.sh@43 -- # break 00:32:45.567 05:27:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:32:45.567 05:27:39 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:32:45.567 SPDK target shutdown done 00:32:45.567 05:27:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:32:45.567 Success 00:32:45.567 00:32:45.567 real 0m1.674s 00:32:45.567 user 0m1.717s 00:32:45.567 sys 0m0.437s 00:32:45.567 05:27:39 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:45.567 05:27:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:32:45.567 ************************************ 00:32:45.567 END TEST json_config_extra_key 00:32:45.567 ************************************ 00:32:45.567 05:27:39 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:32:45.567 05:27:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:45.567 05:27:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:45.567 05:27:39 -- common/autotest_common.sh@10 -- # set +x 00:32:45.567 ************************************ 00:32:45.567 START TEST alias_rpc 00:32:45.567 ************************************ 00:32:45.567 05:27:39 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:32:45.826 * Looking for test storage... 
00:32:45.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:32:45.826 05:27:39 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:45.826 05:27:39 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:32:45.826 05:27:39 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:45.826 05:27:39 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@345 -- # : 1 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:45.826 05:27:39 alias_rpc -- scripts/common.sh@368 -- # return 0 00:32:45.826 05:27:39 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:45.826 05:27:39 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:45.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.826 --rc genhtml_branch_coverage=1 00:32:45.826 --rc genhtml_function_coverage=1 00:32:45.826 --rc genhtml_legend=1 00:32:45.827 --rc geninfo_all_blocks=1 00:32:45.827 --rc geninfo_unexecuted_blocks=1 00:32:45.827 00:32:45.827 ' 00:32:45.827 05:27:39 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:45.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.827 --rc genhtml_branch_coverage=1 00:32:45.827 --rc genhtml_function_coverage=1 00:32:45.827 --rc genhtml_legend=1 00:32:45.827 --rc geninfo_all_blocks=1 00:32:45.827 --rc geninfo_unexecuted_blocks=1 00:32:45.827 00:32:45.827 ' 00:32:45.827 05:27:39 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:45.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.827 --rc genhtml_branch_coverage=1 00:32:45.827 --rc genhtml_function_coverage=1 00:32:45.827 --rc genhtml_legend=1 00:32:45.827 --rc geninfo_all_blocks=1 00:32:45.827 --rc geninfo_unexecuted_blocks=1 00:32:45.827 00:32:45.827 ' 00:32:45.827 05:27:39 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:45.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.827 --rc genhtml_branch_coverage=1 00:32:45.827 --rc genhtml_function_coverage=1 00:32:45.827 --rc genhtml_legend=1 00:32:45.827 --rc geninfo_all_blocks=1 00:32:45.827 --rc geninfo_unexecuted_blocks=1 00:32:45.827 00:32:45.827 ' 00:32:45.827 05:27:39 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:32:45.827 05:27:39 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=527030 00:32:45.827 05:27:39 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:45.827 05:27:39 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 527030 00:32:45.827 05:27:39 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 527030 ']' 00:32:45.827 05:27:39 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:45.827 05:27:39 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:45.827 05:27:39 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:45.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:45.827 05:27:39 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:45.827 05:27:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:45.827 [2024-12-09 05:27:39.957730] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:32:45.827 [2024-12-09 05:27:39.957824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid527030 ] 00:32:45.827 [2024-12-09 05:27:40.025295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.084 [2024-12-09 05:27:40.087983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.341 05:27:40 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:46.341 05:27:40 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:32:46.341 05:27:40 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:32:46.597 05:27:40 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 527030 00:32:46.597 05:27:40 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 527030 ']' 00:32:46.597 05:27:40 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 527030 00:32:46.597 05:27:40 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:32:46.598 05:27:40 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:46.598 05:27:40 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 527030 00:32:46.598 05:27:40 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:46.598 05:27:40 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:46.598 05:27:40 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 527030' 00:32:46.598 killing process with pid 527030 00:32:46.598 05:27:40 alias_rpc -- common/autotest_common.sh@973 -- # kill 527030 00:32:46.598 05:27:40 alias_rpc -- common/autotest_common.sh@978 -- # wait 527030 00:32:47.160 00:32:47.160 real 0m1.383s 00:32:47.160 user 0m1.487s 00:32:47.160 sys 0m0.456s 00:32:47.160 05:27:41 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:47.160 05:27:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:47.160 ************************************ 00:32:47.160 END TEST alias_rpc 00:32:47.160 ************************************ 00:32:47.160 05:27:41 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:32:47.160 05:27:41 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:32:47.160 05:27:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:47.160 05:27:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:47.160 05:27:41 -- common/autotest_common.sh@10 -- # set +x 00:32:47.160 ************************************ 00:32:47.160 START TEST spdkcli_tcp 00:32:47.160 ************************************ 00:32:47.160 05:27:41 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:32:47.160 * Looking for test storage... 
00:32:47.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:47.160 05:27:41 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:47.160 05:27:41 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:32:47.160 05:27:41 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:47.160 05:27:41 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:47.160 05:27:41 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:32:47.160 05:27:41 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:47.160 05:27:41 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:47.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.160 --rc genhtml_branch_coverage=1 00:32:47.160 --rc genhtml_function_coverage=1 00:32:47.160 --rc genhtml_legend=1 00:32:47.160 --rc geninfo_all_blocks=1 00:32:47.160 --rc geninfo_unexecuted_blocks=1 00:32:47.160 00:32:47.160 ' 00:32:47.160 05:27:41 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:47.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.160 --rc genhtml_branch_coverage=1 00:32:47.160 --rc genhtml_function_coverage=1 00:32:47.160 --rc genhtml_legend=1 00:32:47.160 --rc geninfo_all_blocks=1 00:32:47.160 --rc 
geninfo_unexecuted_blocks=1 00:32:47.160 00:32:47.160 ' 00:32:47.160 05:27:41 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:47.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.160 --rc genhtml_branch_coverage=1 00:32:47.160 --rc genhtml_function_coverage=1 00:32:47.160 --rc genhtml_legend=1 00:32:47.160 --rc geninfo_all_blocks=1 00:32:47.160 --rc geninfo_unexecuted_blocks=1 00:32:47.160 00:32:47.160 ' 00:32:47.160 05:27:41 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:47.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.160 --rc genhtml_branch_coverage=1 00:32:47.160 --rc genhtml_function_coverage=1 00:32:47.160 --rc genhtml_legend=1 00:32:47.160 --rc geninfo_all_blocks=1 00:32:47.160 --rc geninfo_unexecuted_blocks=1 00:32:47.160 00:32:47.160 ' 00:32:47.160 05:27:41 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:47.160 05:27:41 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:47.160 05:27:41 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:47.160 05:27:41 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:32:47.160 05:27:41 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:32:47.160 05:27:41 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:47.160 05:27:41 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:32:47.160 05:27:41 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:47.160 05:27:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:47.160 05:27:41 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=527230 00:32:47.160 05:27:41 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:32:47.160 05:27:41 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 527230 00:32:47.160 05:27:41 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 527230 ']' 00:32:47.160 05:27:41 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:47.160 05:27:41 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:47.160 05:27:41 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:47.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:47.160 05:27:41 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:47.160 05:27:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:47.161 [2024-12-09 05:27:41.376480] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:32:47.161 [2024-12-09 05:27:41.376564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid527230 ] 00:32:47.420 [2024-12-09 05:27:41.444832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:47.420 [2024-12-09 05:27:41.504768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:47.420 [2024-12-09 05:27:41.504773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.702 05:27:41 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:47.702 05:27:41 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:47.702 05:27:41 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=527355 00:32:47.702 05:27:41 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:32:47.702 05:27:41 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:32:47.983 [ 00:32:47.983 "bdev_malloc_delete", 00:32:47.983 "bdev_malloc_create", 00:32:47.983 "bdev_null_resize", 00:32:47.983 "bdev_null_delete", 00:32:47.983 "bdev_null_create", 00:32:47.983 "bdev_nvme_cuse_unregister", 00:32:47.983 "bdev_nvme_cuse_register", 00:32:47.983 "bdev_opal_new_user", 00:32:47.983 "bdev_opal_set_lock_state", 00:32:47.983 "bdev_opal_delete", 00:32:47.983 "bdev_opal_get_info", 00:32:47.983 "bdev_opal_create", 00:32:47.983 "bdev_nvme_opal_revert", 00:32:47.983 "bdev_nvme_opal_init", 00:32:47.983 "bdev_nvme_send_cmd", 00:32:47.983 "bdev_nvme_set_keys", 00:32:47.983 "bdev_nvme_get_path_iostat", 00:32:47.983 "bdev_nvme_get_mdns_discovery_info", 00:32:47.983 "bdev_nvme_stop_mdns_discovery", 00:32:47.983 "bdev_nvme_start_mdns_discovery", 00:32:47.983 "bdev_nvme_set_multipath_policy", 00:32:47.983 "bdev_nvme_set_preferred_path", 00:32:47.983 "bdev_nvme_get_io_paths", 00:32:47.983 "bdev_nvme_remove_error_injection", 00:32:47.983 "bdev_nvme_add_error_injection", 00:32:47.983 "bdev_nvme_get_discovery_info", 00:32:47.983 "bdev_nvme_stop_discovery", 00:32:47.983 "bdev_nvme_start_discovery", 00:32:47.983 "bdev_nvme_get_controller_health_info", 00:32:47.983 "bdev_nvme_disable_controller", 00:32:47.983 "bdev_nvme_enable_controller", 00:32:47.983 "bdev_nvme_reset_controller", 00:32:47.983 "bdev_nvme_get_transport_statistics", 00:32:47.983 "bdev_nvme_apply_firmware", 00:32:47.983 "bdev_nvme_detach_controller", 00:32:47.983 "bdev_nvme_get_controllers", 00:32:47.983 "bdev_nvme_attach_controller", 00:32:47.983 "bdev_nvme_set_hotplug", 00:32:47.983 "bdev_nvme_set_options", 00:32:47.983 "bdev_passthru_delete", 00:32:47.983 "bdev_passthru_create", 00:32:47.983 "bdev_lvol_set_parent_bdev", 00:32:47.983 "bdev_lvol_set_parent", 00:32:47.983 "bdev_lvol_check_shallow_copy", 00:32:47.983 "bdev_lvol_start_shallow_copy", 00:32:47.983 "bdev_lvol_grow_lvstore", 00:32:47.983 "bdev_lvol_get_lvols", 00:32:47.983 "bdev_lvol_get_lvstores", 00:32:47.983 "bdev_lvol_delete", 00:32:47.983 "bdev_lvol_set_read_only", 00:32:47.983 "bdev_lvol_resize", 00:32:47.983 "bdev_lvol_decouple_parent", 00:32:47.983 "bdev_lvol_inflate", 00:32:47.983 "bdev_lvol_rename", 00:32:47.983 "bdev_lvol_clone_bdev", 00:32:47.983 "bdev_lvol_clone", 00:32:47.983 "bdev_lvol_snapshot", 00:32:47.983 "bdev_lvol_create", 00:32:47.983 "bdev_lvol_delete_lvstore", 00:32:47.983 "bdev_lvol_rename_lvstore", 
00:32:47.983 "bdev_lvol_create_lvstore", 00:32:47.983 "bdev_raid_set_options", 00:32:47.983 "bdev_raid_remove_base_bdev", 00:32:47.983 "bdev_raid_add_base_bdev", 00:32:47.983 "bdev_raid_delete", 00:32:47.983 "bdev_raid_create", 00:32:47.983 "bdev_raid_get_bdevs", 00:32:47.983 "bdev_error_inject_error", 00:32:47.983 "bdev_error_delete", 00:32:47.983 "bdev_error_create", 00:32:47.983 "bdev_split_delete", 00:32:47.983 "bdev_split_create", 00:32:47.983 "bdev_delay_delete", 00:32:47.983 "bdev_delay_create", 00:32:47.983 "bdev_delay_update_latency", 00:32:47.983 "bdev_zone_block_delete", 00:32:47.983 "bdev_zone_block_create", 00:32:47.983 "blobfs_create", 00:32:47.983 "blobfs_detect", 00:32:47.983 "blobfs_set_cache_size", 00:32:47.983 "bdev_aio_delete", 00:32:47.983 "bdev_aio_rescan", 00:32:47.983 "bdev_aio_create", 00:32:47.983 "bdev_ftl_set_property", 00:32:47.983 "bdev_ftl_get_properties", 00:32:47.983 "bdev_ftl_get_stats", 00:32:47.983 "bdev_ftl_unmap", 00:32:47.983 "bdev_ftl_unload", 00:32:47.983 "bdev_ftl_delete", 00:32:47.983 "bdev_ftl_load", 00:32:47.983 "bdev_ftl_create", 00:32:47.983 "bdev_virtio_attach_controller", 00:32:47.983 "bdev_virtio_scsi_get_devices", 00:32:47.983 "bdev_virtio_detach_controller", 00:32:47.983 "bdev_virtio_blk_set_hotplug", 00:32:47.983 "bdev_iscsi_delete", 00:32:47.983 "bdev_iscsi_create", 00:32:47.983 "bdev_iscsi_set_options", 00:32:47.983 "accel_error_inject_error", 00:32:47.984 "ioat_scan_accel_module", 00:32:47.984 "dsa_scan_accel_module", 00:32:47.984 "iaa_scan_accel_module", 00:32:47.984 "vfu_virtio_create_fs_endpoint", 00:32:47.984 "vfu_virtio_create_scsi_endpoint", 00:32:47.984 "vfu_virtio_scsi_remove_target", 00:32:47.984 "vfu_virtio_scsi_add_target", 00:32:47.984 "vfu_virtio_create_blk_endpoint", 00:32:47.984 "vfu_virtio_delete_endpoint", 00:32:47.984 "keyring_file_remove_key", 00:32:47.984 "keyring_file_add_key", 00:32:47.984 "keyring_linux_set_options", 00:32:47.984 "fsdev_aio_delete", 00:32:47.984 "fsdev_aio_create", 00:32:47.984 "iscsi_get_histogram", 00:32:47.984 "iscsi_enable_histogram", 00:32:47.984 "iscsi_set_options", 00:32:47.984 "iscsi_get_auth_groups", 00:32:47.984 "iscsi_auth_group_remove_secret", 00:32:47.984 "iscsi_auth_group_add_secret", 00:32:47.984 "iscsi_delete_auth_group", 00:32:47.984 "iscsi_create_auth_group", 00:32:47.984 "iscsi_set_discovery_auth", 00:32:47.984 "iscsi_get_options", 00:32:47.984 "iscsi_target_node_request_logout", 00:32:47.984 "iscsi_target_node_set_redirect", 00:32:47.984 "iscsi_target_node_set_auth", 00:32:47.984 "iscsi_target_node_add_lun", 00:32:47.984 "iscsi_get_stats", 00:32:47.984 "iscsi_get_connections", 00:32:47.984 "iscsi_portal_group_set_auth", 00:32:47.984 "iscsi_start_portal_group", 00:32:47.984 "iscsi_delete_portal_group", 00:32:47.984 "iscsi_create_portal_group", 00:32:47.984 "iscsi_get_portal_groups", 00:32:47.984 "iscsi_delete_target_node", 00:32:47.984 "iscsi_target_node_remove_pg_ig_maps", 00:32:47.984 "iscsi_target_node_add_pg_ig_maps", 00:32:47.984 "iscsi_create_target_node", 00:32:47.984 "iscsi_get_target_nodes", 00:32:47.984 "iscsi_delete_initiator_group", 00:32:47.984 "iscsi_initiator_group_remove_initiators", 00:32:47.984 "iscsi_initiator_group_add_initiators", 00:32:47.984 "iscsi_create_initiator_group", 00:32:47.984 "iscsi_get_initiator_groups", 00:32:47.984 "nvmf_set_crdt", 00:32:47.984 "nvmf_set_config", 00:32:47.984 "nvmf_set_max_subsystems", 00:32:47.984 "nvmf_stop_mdns_prr", 00:32:47.984 "nvmf_publish_mdns_prr", 00:32:47.984 "nvmf_subsystem_get_listeners", 00:32:47.984 
"nvmf_subsystem_get_qpairs", 00:32:47.984 "nvmf_subsystem_get_controllers", 00:32:47.984 "nvmf_get_stats", 00:32:47.984 "nvmf_get_transports", 00:32:47.984 "nvmf_create_transport", 00:32:47.984 "nvmf_get_targets", 00:32:47.984 "nvmf_delete_target", 00:32:47.984 "nvmf_create_target", 00:32:47.984 "nvmf_subsystem_allow_any_host", 00:32:47.984 "nvmf_subsystem_set_keys", 00:32:47.984 "nvmf_subsystem_remove_host", 00:32:47.984 "nvmf_subsystem_add_host", 00:32:47.984 "nvmf_ns_remove_host", 00:32:47.984 "nvmf_ns_add_host", 00:32:47.984 "nvmf_subsystem_remove_ns", 00:32:47.984 "nvmf_subsystem_set_ns_ana_group", 00:32:47.984 "nvmf_subsystem_add_ns", 00:32:47.984 "nvmf_subsystem_listener_set_ana_state", 00:32:47.984 "nvmf_discovery_get_referrals", 00:32:47.984 "nvmf_discovery_remove_referral", 00:32:47.984 "nvmf_discovery_add_referral", 00:32:47.984 "nvmf_subsystem_remove_listener", 00:32:47.984 "nvmf_subsystem_add_listener", 00:32:47.984 "nvmf_delete_subsystem", 00:32:47.984 "nvmf_create_subsystem", 00:32:47.984 "nvmf_get_subsystems", 00:32:47.984 "env_dpdk_get_mem_stats", 00:32:47.984 "nbd_get_disks", 00:32:47.984 "nbd_stop_disk", 00:32:47.984 "nbd_start_disk", 00:32:47.984 "ublk_recover_disk", 00:32:47.984 "ublk_get_disks", 00:32:47.984 "ublk_stop_disk", 00:32:47.984 "ublk_start_disk", 00:32:47.984 "ublk_destroy_target", 00:32:47.984 "ublk_create_target", 00:32:47.984 "virtio_blk_create_transport", 00:32:47.984 "virtio_blk_get_transports", 00:32:47.984 "vhost_controller_set_coalescing", 00:32:47.984 "vhost_get_controllers", 00:32:47.984 "vhost_delete_controller", 00:32:47.984 "vhost_create_blk_controller", 00:32:47.984 "vhost_scsi_controller_remove_target", 00:32:47.984 "vhost_scsi_controller_add_target", 00:32:47.984 "vhost_start_scsi_controller", 00:32:47.984 "vhost_create_scsi_controller", 00:32:47.984 "thread_set_cpumask", 00:32:47.984 "scheduler_set_options", 00:32:47.984 "framework_get_governor", 00:32:47.984 "framework_get_scheduler", 00:32:47.984 "framework_set_scheduler", 00:32:47.984 "framework_get_reactors", 00:32:47.984 "thread_get_io_channels", 00:32:47.984 "thread_get_pollers", 00:32:47.984 "thread_get_stats", 00:32:47.984 "framework_monitor_context_switch", 00:32:47.984 "spdk_kill_instance", 00:32:47.984 "log_enable_timestamps", 00:32:47.984 "log_get_flags", 00:32:47.984 "log_clear_flag", 00:32:47.984 "log_set_flag", 00:32:47.984 "log_get_level", 00:32:47.984 "log_set_level", 00:32:47.984 "log_get_print_level", 00:32:47.984 "log_set_print_level", 00:32:47.984 "framework_enable_cpumask_locks", 00:32:47.984 "framework_disable_cpumask_locks", 00:32:47.984 "framework_wait_init", 00:32:47.984 "framework_start_init", 00:32:47.984 "scsi_get_devices", 00:32:47.984 "bdev_get_histogram", 00:32:47.984 "bdev_enable_histogram", 00:32:47.984 "bdev_set_qos_limit", 00:32:47.984 "bdev_set_qd_sampling_period", 00:32:47.984 "bdev_get_bdevs", 00:32:47.984 "bdev_reset_iostat", 00:32:47.984 "bdev_get_iostat", 00:32:47.984 "bdev_examine", 00:32:47.984 "bdev_wait_for_examine", 00:32:47.984 "bdev_set_options", 00:32:47.984 "accel_get_stats", 00:32:47.984 "accel_set_options", 00:32:47.984 "accel_set_driver", 00:32:47.984 "accel_crypto_key_destroy", 00:32:47.984 "accel_crypto_keys_get", 00:32:47.984 "accel_crypto_key_create", 00:32:47.984 "accel_assign_opc", 00:32:47.984 "accel_get_module_info", 00:32:47.984 "accel_get_opc_assignments", 00:32:47.984 "vmd_rescan", 00:32:47.984 "vmd_remove_device", 00:32:47.984 "vmd_enable", 00:32:47.984 "sock_get_default_impl", 00:32:47.984 "sock_set_default_impl", 
00:32:47.984 "sock_impl_set_options", 00:32:47.984 "sock_impl_get_options", 00:32:47.984 "iobuf_get_stats", 00:32:47.984 "iobuf_set_options", 00:32:47.984 "keyring_get_keys", 00:32:47.984 "vfu_tgt_set_base_path", 00:32:47.984 "framework_get_pci_devices", 00:32:47.984 "framework_get_config", 00:32:47.984 "framework_get_subsystems", 00:32:47.984 "fsdev_set_opts", 00:32:47.984 "fsdev_get_opts", 00:32:47.984 "trace_get_info", 00:32:47.984 "trace_get_tpoint_group_mask", 00:32:47.984 "trace_disable_tpoint_group", 00:32:47.984 "trace_enable_tpoint_group", 00:32:47.984 "trace_clear_tpoint_mask", 00:32:47.984 "trace_set_tpoint_mask", 00:32:47.984 "notify_get_notifications", 00:32:47.984 "notify_get_types", 00:32:47.984 "spdk_get_version", 00:32:47.984 "rpc_get_methods" 00:32:47.984 ] 00:32:47.984 05:27:42 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:32:47.984 05:27:42 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:47.984 05:27:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:47.984 05:27:42 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:47.984 05:27:42 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 527230 00:32:47.984 05:27:42 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 527230 ']' 00:32:47.984 05:27:42 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 527230 00:32:47.984 05:27:42 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:32:47.984 05:27:42 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:47.984 05:27:42 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 527230 00:32:47.984 05:27:42 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:47.984 05:27:42 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:47.984 05:27:42 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 527230' 00:32:47.984 killing process with pid 527230 00:32:47.984 05:27:42 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 527230 00:32:47.984 05:27:42 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 527230 00:32:48.599 00:32:48.599 real 0m1.382s 00:32:48.599 user 0m2.450s 00:32:48.599 sys 0m0.464s 00:32:48.599 05:27:42 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:48.599 05:27:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:48.599 ************************************ 00:32:48.599 END TEST spdkcli_tcp 00:32:48.599 ************************************ 00:32:48.599 05:27:42 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:32:48.599 05:27:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:48.599 05:27:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:48.599 05:27:42 -- common/autotest_common.sh@10 -- # set +x 00:32:48.599 ************************************ 00:32:48.599 START TEST dpdk_mem_utility 00:32:48.599 ************************************ 00:32:48.599 05:27:42 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:32:48.599 * Looking for test storage... 
00:32:48.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:32:48.600 05:27:42 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:48.600 05:27:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:32:48.600 05:27:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:48.600 05:27:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:48.600 05:27:42 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:32:48.600 05:27:42 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:48.600 05:27:42 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:48.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.600 --rc genhtml_branch_coverage=1 00:32:48.600 --rc genhtml_function_coverage=1 00:32:48.600 --rc genhtml_legend=1 00:32:48.600 --rc geninfo_all_blocks=1 00:32:48.600 --rc geninfo_unexecuted_blocks=1 00:32:48.600 00:32:48.600 ' 00:32:48.600 05:27:42 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:48.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.600 --rc 
genhtml_branch_coverage=1 00:32:48.600 --rc genhtml_function_coverage=1 00:32:48.600 --rc genhtml_legend=1 00:32:48.600 --rc geninfo_all_blocks=1 00:32:48.600 --rc geninfo_unexecuted_blocks=1 00:32:48.600 00:32:48.600 ' 00:32:48.600 05:27:42 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:48.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.600 --rc genhtml_branch_coverage=1 00:32:48.600 --rc genhtml_function_coverage=1 00:32:48.600 --rc genhtml_legend=1 00:32:48.600 --rc geninfo_all_blocks=1 00:32:48.600 --rc geninfo_unexecuted_blocks=1 00:32:48.600 00:32:48.600 ' 00:32:48.600 05:27:42 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:48.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.600 --rc genhtml_branch_coverage=1 00:32:48.600 --rc genhtml_function_coverage=1 00:32:48.600 --rc genhtml_legend=1 00:32:48.600 --rc geninfo_all_blocks=1 00:32:48.600 --rc geninfo_unexecuted_blocks=1 00:32:48.600 00:32:48.600 ' 00:32:48.600 05:27:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:32:48.600 05:27:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=527560 00:32:48.600 05:27:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:48.600 05:27:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 527560 00:32:48.600 05:27:42 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 527560 ']' 00:32:48.600 05:27:42 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:48.600 05:27:42 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:48.600 05:27:42 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:48.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:48.600 05:27:42 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:48.600 05:27:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:32:48.600 [2024-12-09 05:27:42.816675] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
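Stripped of the xtrace noise, the dpdk_mem_utility run traced here is a short request-and-inspect cycle. A minimal sketch of that flow, assuming the rpc_cmd wrapper in the trace resolves to a direct scripts/rpc.py call and with the long workspace paths shortened for readability:

./build/bin/spdk_tgt &                   # target started by test_dpdk_mem_info.sh and waited on via waitforlisten
./scripts/rpc.py env_dpdk_get_mem_stats  # target replies { "filename": "/tmp/spdk_mem_dump.txt" }
./scripts/dpdk_mem_info.py               # prints the heap/mempool/memzone summary seen below
./scripts/dpdk_mem_info.py -m 0          # prints the per-heap element listing for heap id 0 seen below

The RPC name, script name, -m 0 flag and dump filename are all taken from the trace; only the invocation form and shortened paths are editorial.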
00:32:48.600 [2024-12-09 05:27:42.816760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid527560 ] 00:32:48.858 [2024-12-09 05:27:42.885470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:48.858 [2024-12-09 05:27:42.944156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.116 05:27:43 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:49.116 05:27:43 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:32:49.116 05:27:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:32:49.116 05:27:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:32:49.116 05:27:43 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.116 05:27:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:32:49.116 { 00:32:49.116 "filename": "/tmp/spdk_mem_dump.txt" 00:32:49.116 } 00:32:49.116 05:27:43 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.116 05:27:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:32:49.116 DPDK memory size 818.000000 MiB in 1 heap(s) 00:32:49.116 1 heaps totaling size 818.000000 MiB 00:32:49.116 size: 818.000000 MiB heap id: 0 00:32:49.116 end heaps---------- 00:32:49.116 9 mempools totaling size 603.782043 MiB 00:32:49.116 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:32:49.116 size: 158.602051 MiB name: PDU_data_out_Pool 00:32:49.116 size: 100.555481 MiB name: bdev_io_527560 00:32:49.116 size: 50.003479 MiB name: msgpool_527560 00:32:49.116 size: 36.509338 MiB name: fsdev_io_527560 00:32:49.116 size: 21.763794 MiB name: PDU_Pool 00:32:49.116 size: 19.513306 MiB name: SCSI_TASK_Pool 00:32:49.116 size: 4.133484 MiB name: evtpool_527560 00:32:49.116 size: 0.026123 MiB name: Session_Pool 00:32:49.116 end mempools------- 00:32:49.116 6 memzones totaling size 4.142822 MiB 00:32:49.116 size: 1.000366 MiB name: RG_ring_0_527560 00:32:49.116 size: 1.000366 MiB name: RG_ring_1_527560 00:32:49.116 size: 1.000366 MiB name: RG_ring_4_527560 00:32:49.116 size: 1.000366 MiB name: RG_ring_5_527560 00:32:49.116 size: 0.125366 MiB name: RG_ring_2_527560 00:32:49.116 size: 0.015991 MiB name: RG_ring_3_527560 00:32:49.116 end memzones------- 00:32:49.116 05:27:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:32:49.116 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:32:49.116 list of free elements. 
size: 10.852478 MiB 00:32:49.116 element at address: 0x200019200000 with size: 0.999878 MiB 00:32:49.116 element at address: 0x200019400000 with size: 0.999878 MiB 00:32:49.116 element at address: 0x200000400000 with size: 0.998535 MiB 00:32:49.116 element at address: 0x200032000000 with size: 0.994446 MiB 00:32:49.116 element at address: 0x200006400000 with size: 0.959839 MiB 00:32:49.116 element at address: 0x200012c00000 with size: 0.944275 MiB 00:32:49.116 element at address: 0x200019600000 with size: 0.936584 MiB 00:32:49.116 element at address: 0x200000200000 with size: 0.717346 MiB 00:32:49.116 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:32:49.116 element at address: 0x200000c00000 with size: 0.495422 MiB 00:32:49.116 element at address: 0x20000a600000 with size: 0.490723 MiB 00:32:49.116 element at address: 0x200019800000 with size: 0.485657 MiB 00:32:49.116 element at address: 0x200003e00000 with size: 0.481934 MiB 00:32:49.116 element at address: 0x200028200000 with size: 0.410034 MiB 00:32:49.116 element at address: 0x200000800000 with size: 0.355042 MiB 00:32:49.116 list of standard malloc elements. size: 199.218628 MiB 00:32:49.116 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:32:49.116 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:32:49.116 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:32:49.116 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:32:49.116 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:32:49.116 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:32:49.116 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:32:49.116 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:32:49.116 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:32:49.116 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:32:49.116 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:32:49.116 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:32:49.116 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:32:49.116 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:32:49.116 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:32:49.116 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:32:49.116 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:32:49.116 element at address: 0x20000085b040 with size: 0.000183 MiB 00:32:49.116 element at address: 0x20000085f300 with size: 0.000183 MiB 00:32:49.116 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:32:49.116 element at address: 0x20000087f680 with size: 0.000183 MiB 00:32:49.116 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:32:49.116 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:32:49.116 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:32:49.116 element at address: 0x200000cff000 with size: 0.000183 MiB 00:32:49.116 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:32:49.116 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:32:49.116 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:32:49.116 element at address: 0x200003efb980 with size: 0.000183 MiB 00:32:49.116 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:32:49.116 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:32:49.116 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:32:49.116 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:32:49.116 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:32:49.116 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:32:49.116 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:32:49.116 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:32:49.116 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:32:49.116 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:32:49.116 element at address: 0x200028268f80 with size: 0.000183 MiB 00:32:49.116 element at address: 0x200028269040 with size: 0.000183 MiB 00:32:49.116 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:32:49.116 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:32:49.116 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:32:49.116 list of memzone associated elements. size: 607.928894 MiB 00:32:49.116 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:32:49.116 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:32:49.116 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:32:49.116 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:32:49.116 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:32:49.116 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_527560_0 00:32:49.116 element at address: 0x200000dff380 with size: 48.003052 MiB 00:32:49.116 associated memzone info: size: 48.002930 MiB name: MP_msgpool_527560_0 00:32:49.116 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:32:49.116 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_527560_0 00:32:49.116 element at address: 0x2000199be940 with size: 20.255554 MiB 00:32:49.116 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:32:49.116 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:32:49.116 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:32:49.116 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:32:49.116 associated memzone info: size: 3.000122 MiB name: MP_evtpool_527560_0 00:32:49.116 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:32:49.116 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_527560 00:32:49.116 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:32:49.116 associated memzone info: size: 1.007996 MiB name: MP_evtpool_527560 00:32:49.116 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:32:49.117 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:32:49.117 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:32:49.117 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:32:49.117 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:32:49.117 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:32:49.117 element at address: 0x200003efba40 with size: 1.008118 MiB 00:32:49.117 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:32:49.117 element at address: 0x200000cff180 with size: 1.000488 MiB 00:32:49.117 associated memzone info: size: 1.000366 MiB name: RG_ring_0_527560 00:32:49.117 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:32:49.117 associated memzone info: size: 1.000366 MiB name: RG_ring_1_527560 00:32:49.117 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:32:49.117 associated memzone info: size: 1.000366 MiB name: RG_ring_4_527560 00:32:49.117 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:32:49.117 associated memzone info: size: 1.000366 MiB name: RG_ring_5_527560 00:32:49.117 element at address: 0x20000087f740 with size: 0.500488 MiB 00:32:49.117 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_527560 00:32:49.117 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:32:49.117 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_527560 00:32:49.117 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:32:49.117 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:32:49.117 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:32:49.117 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:32:49.117 element at address: 0x20001987c540 with size: 0.250488 MiB 00:32:49.117 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:32:49.117 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:32:49.117 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_527560 00:32:49.117 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:32:49.117 associated memzone info: size: 0.125366 MiB name: RG_ring_2_527560 00:32:49.117 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:32:49.117 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:32:49.117 element at address: 0x200028269100 with size: 0.023743 MiB 00:32:49.117 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:32:49.117 element at address: 0x20000085b100 with size: 0.016113 MiB 00:32:49.117 associated memzone info: size: 0.015991 MiB name: RG_ring_3_527560 00:32:49.117 element at address: 0x20002826f240 with size: 0.002441 MiB 00:32:49.117 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:32:49.117 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:32:49.117 associated memzone info: size: 0.000183 MiB name: MP_msgpool_527560 00:32:49.117 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:32:49.117 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_527560 00:32:49.117 element at address: 0x20000085af00 with size: 0.000305 MiB 00:32:49.117 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_527560 00:32:49.117 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:32:49.117 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:32:49.117 05:27:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:32:49.117 05:27:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 527560 00:32:49.117 05:27:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 527560 ']' 00:32:49.117 05:27:43 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 527560 00:32:49.117 05:27:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:32:49.117 05:27:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:49.117 05:27:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 527560 00:32:49.373 05:27:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:49.373 05:27:43 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:49.373 05:27:43 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 527560' 00:32:49.373 killing process with pid 527560 00:32:49.373 05:27:43 dpdk_mem_utility -- 
common/autotest_common.sh@973 -- # kill 527560 00:32:49.373 05:27:43 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 527560 00:32:49.630 00:32:49.630 real 0m1.200s 00:32:49.630 user 0m1.185s 00:32:49.630 sys 0m0.416s 00:32:49.630 05:27:43 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:49.630 05:27:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:32:49.630 ************************************ 00:32:49.630 END TEST dpdk_mem_utility 00:32:49.630 ************************************ 00:32:49.630 05:27:43 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:32:49.630 05:27:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:49.630 05:27:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:49.630 05:27:43 -- common/autotest_common.sh@10 -- # set +x 00:32:49.888 ************************************ 00:32:49.888 START TEST event 00:32:49.888 ************************************ 00:32:49.888 05:27:43 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:32:49.888 * Looking for test storage... 00:32:49.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:32:49.888 05:27:43 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:49.888 05:27:43 event -- common/autotest_common.sh@1693 -- # lcov --version 00:32:49.888 05:27:43 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:49.888 05:27:44 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:49.888 05:27:44 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:49.888 05:27:44 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:49.888 05:27:44 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:49.888 05:27:44 event -- scripts/common.sh@336 -- # IFS=.-: 00:32:49.888 05:27:44 event -- scripts/common.sh@336 -- # read -ra ver1 00:32:49.888 05:27:44 event -- scripts/common.sh@337 -- # IFS=.-: 00:32:49.888 05:27:44 event -- scripts/common.sh@337 -- # read -ra ver2 00:32:49.888 05:27:44 event -- scripts/common.sh@338 -- # local 'op=<' 00:32:49.888 05:27:44 event -- scripts/common.sh@340 -- # ver1_l=2 00:32:49.888 05:27:44 event -- scripts/common.sh@341 -- # ver2_l=1 00:32:49.888 05:27:44 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:49.888 05:27:44 event -- scripts/common.sh@344 -- # case "$op" in 00:32:49.888 05:27:44 event -- scripts/common.sh@345 -- # : 1 00:32:49.888 05:27:44 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:49.888 05:27:44 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:49.888 05:27:44 event -- scripts/common.sh@365 -- # decimal 1 00:32:49.888 05:27:44 event -- scripts/common.sh@353 -- # local d=1 00:32:49.888 05:27:44 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:49.888 05:27:44 event -- scripts/common.sh@355 -- # echo 1 00:32:49.888 05:27:44 event -- scripts/common.sh@365 -- # ver1[v]=1 00:32:49.888 05:27:44 event -- scripts/common.sh@366 -- # decimal 2 00:32:49.888 05:27:44 event -- scripts/common.sh@353 -- # local d=2 00:32:49.888 05:27:44 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:49.888 05:27:44 event -- scripts/common.sh@355 -- # echo 2 00:32:49.888 05:27:44 event -- scripts/common.sh@366 -- # ver2[v]=2 00:32:49.888 05:27:44 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:49.888 05:27:44 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:49.888 05:27:44 event -- scripts/common.sh@368 -- # return 0 00:32:49.888 05:27:44 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:49.888 05:27:44 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:49.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.888 --rc genhtml_branch_coverage=1 00:32:49.888 --rc genhtml_function_coverage=1 00:32:49.888 --rc genhtml_legend=1 00:32:49.888 --rc geninfo_all_blocks=1 00:32:49.888 --rc geninfo_unexecuted_blocks=1 00:32:49.888 00:32:49.888 ' 00:32:49.888 05:27:44 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:49.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.888 --rc genhtml_branch_coverage=1 00:32:49.888 --rc genhtml_function_coverage=1 00:32:49.888 --rc genhtml_legend=1 00:32:49.888 --rc geninfo_all_blocks=1 00:32:49.888 --rc geninfo_unexecuted_blocks=1 00:32:49.888 00:32:49.888 ' 00:32:49.888 05:27:44 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:49.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.888 --rc genhtml_branch_coverage=1 00:32:49.888 --rc genhtml_function_coverage=1 00:32:49.888 --rc genhtml_legend=1 00:32:49.888 --rc geninfo_all_blocks=1 00:32:49.888 --rc geninfo_unexecuted_blocks=1 00:32:49.888 00:32:49.888 ' 00:32:49.888 05:27:44 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:49.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.888 --rc genhtml_branch_coverage=1 00:32:49.888 --rc genhtml_function_coverage=1 00:32:49.888 --rc genhtml_legend=1 00:32:49.888 --rc geninfo_all_blocks=1 00:32:49.888 --rc geninfo_unexecuted_blocks=1 00:32:49.888 00:32:49.888 ' 00:32:49.888 05:27:44 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:32:49.888 05:27:44 event -- bdev/nbd_common.sh@6 -- # set -e 00:32:49.888 05:27:44 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:32:49.889 05:27:44 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:32:49.889 05:27:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:49.889 05:27:44 event -- common/autotest_common.sh@10 -- # set +x 00:32:49.889 ************************************ 00:32:49.889 START TEST event_perf 00:32:49.889 ************************************ 00:32:49.889 05:27:44 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:32:49.889 Running I/O for 1 seconds...[2024-12-09 05:27:44.056640] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:32:49.889 [2024-12-09 05:27:44.056704] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid527762 ] 00:32:50.147 [2024-12-09 05:27:44.125669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:50.147 [2024-12-09 05:27:44.188603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.147 [2024-12-09 05:27:44.188669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:50.147 [2024-12-09 05:27:44.188730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:50.147 [2024-12-09 05:27:44.188733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.079 Running I/O for 1 seconds... 00:32:51.079 lcore 0: 233653 00:32:51.079 lcore 1: 233651 00:32:51.079 lcore 2: 233653 00:32:51.079 lcore 3: 233652 00:32:51.079 done. 00:32:51.079 00:32:51.079 real 0m1.249s 00:32:51.079 user 0m4.165s 00:32:51.079 sys 0m0.078s 00:32:51.079 05:27:45 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:51.079 05:27:45 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:32:51.079 ************************************ 00:32:51.079 END TEST event_perf 00:32:51.079 ************************************ 00:32:51.339 05:27:45 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:32:51.339 05:27:45 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:51.339 05:27:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:51.339 05:27:45 event -- common/autotest_common.sh@10 -- # set +x 00:32:51.339 ************************************ 00:32:51.339 START TEST event_reactor 00:32:51.339 ************************************ 00:32:51.339 05:27:45 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:32:51.339 [2024-12-09 05:27:45.358365] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:32:51.339 [2024-12-09 05:27:45.358427] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid527919 ] 00:32:51.339 [2024-12-09 05:27:45.424289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.339 [2024-12-09 05:27:45.478761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:52.712 test_start 00:32:52.712 oneshot 00:32:52.712 tick 100 00:32:52.712 tick 100 00:32:52.712 tick 250 00:32:52.712 tick 100 00:32:52.712 tick 100 00:32:52.712 tick 100 00:32:52.712 tick 250 00:32:52.712 tick 500 00:32:52.712 tick 100 00:32:52.712 tick 100 00:32:52.712 tick 250 00:32:52.712 tick 100 00:32:52.712 tick 100 00:32:52.712 test_end 00:32:52.712 00:32:52.712 real 0m1.234s 00:32:52.712 user 0m1.167s 00:32:52.712 sys 0m0.062s 00:32:52.712 05:27:46 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:52.712 05:27:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:32:52.712 ************************************ 00:32:52.712 END TEST event_reactor 00:32:52.712 ************************************ 00:32:52.712 05:27:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:32:52.712 05:27:46 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:52.712 05:27:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:52.712 05:27:46 event -- common/autotest_common.sh@10 -- # set +x 00:32:52.712 ************************************ 00:32:52.712 START TEST event_reactor_perf 00:32:52.712 ************************************ 00:32:52.712 05:27:46 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:32:52.712 [2024-12-09 05:27:46.642676] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
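For orientation, the two reactor tests traced in this stretch are thin wrappers around small test binaries; with the workspace prefix dropped, the invocations amount to:

./test/event/reactor/reactor -t 1            # drives a one-shot event plus timed pollers for 1 s
                                             # (the test_start/oneshot/tick lines above)
./test/event/reactor_perf/reactor_perf -t 1  # measures raw event throughput on one reactor for 1 s
                                             # (reported a little further down as ~447k events per second)

What the binaries do internally is inferred here from their console output only, not from their sources.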
00:32:52.712 [2024-12-09 05:27:46.642746] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid528075 ] 00:32:52.712 [2024-12-09 05:27:46.709530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:52.712 [2024-12-09 05:27:46.763149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:53.646 test_start 00:32:53.646 test_end 00:32:53.646 Performance: 447249 events per second 00:32:53.646 00:32:53.646 real 0m1.233s 00:32:53.646 user 0m1.161s 00:32:53.646 sys 0m0.067s 00:32:53.646 05:27:47 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:53.646 05:27:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:32:53.646 ************************************ 00:32:53.646 END TEST event_reactor_perf 00:32:53.646 ************************************ 00:32:53.906 05:27:47 event -- event/event.sh@49 -- # uname -s 00:32:53.906 05:27:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:32:53.906 05:27:47 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:32:53.906 05:27:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:53.906 05:27:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:53.906 05:27:47 event -- common/autotest_common.sh@10 -- # set +x 00:32:53.906 ************************************ 00:32:53.906 START TEST event_scheduler 00:32:53.906 ************************************ 00:32:53.906 05:27:47 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:32:53.906 * Looking for test storage... 
00:32:53.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:32:53.906 05:27:47 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:53.906 05:27:47 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:32:53.906 05:27:47 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:53.906 05:27:48 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:53.906 05:27:48 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:32:53.906 05:27:48 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:53.906 05:27:48 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:53.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.906 --rc genhtml_branch_coverage=1 00:32:53.906 --rc genhtml_function_coverage=1 00:32:53.906 --rc genhtml_legend=1 00:32:53.906 --rc geninfo_all_blocks=1 00:32:53.906 --rc geninfo_unexecuted_blocks=1 00:32:53.906 00:32:53.906 ' 00:32:53.906 05:27:48 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:53.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.906 --rc genhtml_branch_coverage=1 00:32:53.906 --rc genhtml_function_coverage=1 00:32:53.906 --rc genhtml_legend=1 00:32:53.906 --rc geninfo_all_blocks=1 00:32:53.906 --rc geninfo_unexecuted_blocks=1 00:32:53.906 00:32:53.906 ' 00:32:53.906 05:27:48 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:53.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.906 --rc genhtml_branch_coverage=1 00:32:53.906 --rc genhtml_function_coverage=1 00:32:53.906 --rc genhtml_legend=1 00:32:53.906 --rc geninfo_all_blocks=1 00:32:53.906 --rc geninfo_unexecuted_blocks=1 00:32:53.906 00:32:53.906 ' 00:32:53.906 05:27:48 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:53.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.906 --rc genhtml_branch_coverage=1 00:32:53.906 --rc genhtml_function_coverage=1 00:32:53.906 --rc genhtml_legend=1 00:32:53.906 --rc geninfo_all_blocks=1 00:32:53.906 --rc geninfo_unexecuted_blocks=1 00:32:53.906 00:32:53.906 ' 00:32:53.906 05:27:48 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:32:53.906 05:27:48 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=528275 00:32:53.906 05:27:48 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:32:53.906 05:27:48 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:32:53.906 05:27:48 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 528275 
00:32:53.906 05:27:48 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 528275 ']' 00:32:53.906 05:27:48 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:53.906 05:27:48 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:53.906 05:27:48 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:53.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:53.906 05:27:48 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:53.906 05:27:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:32:53.906 [2024-12-09 05:27:48.106745] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:32:53.906 [2024-12-09 05:27:48.106824] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid528275 ] 00:32:54.165 [2024-12-09 05:27:48.175701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:54.165 [2024-12-09 05:27:48.240532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:54.165 [2024-12-09 05:27:48.240557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:54.165 [2024-12-09 05:27:48.240616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:54.165 [2024-12-09 05:27:48.240621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:54.165 05:27:48 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:54.165 05:27:48 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:32:54.165 05:27:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:32:54.165 05:27:48 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.165 05:27:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:32:54.165 [2024-12-09 05:27:48.357636] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:32:54.165 [2024-12-09 05:27:48.357663] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:32:54.165 [2024-12-09 05:27:48.357680] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:32:54.165 [2024-12-09 05:27:48.357692] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:32:54.165 [2024-12-09 05:27:48.357701] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:32:54.165 05:27:48 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.165 05:27:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:32:54.165 05:27:48 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.165 05:27:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:32:54.422 [2024-12-09 05:27:48.460597] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
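The scheduler_create_thread subtest that begins next exercises a small RPC plugin. Condensed from the xtrace below, and assuming the rpc_cmd wrapper resolves to a direct scripts/rpc.py call with the plugin module importable (the -n/-m/-a meanings of thread name, cpumask and active percentage are likewise read off the trace, not the plugin source), the interesting calls are roughly:

rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # pinned thread, always active
rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # pinned thread, idle
rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0              # returns thread_id 11
rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50                        # bump thread 11 to 50% active
rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100                # returns thread_id 12
rpc.py --plugin scheduler_plugin scheduler_thread_delete 12                               # delete thread 12 again

The trace also repeats the pinned creates for cpumasks 0x2, 0x4 and 0x8; only one of each is shown here.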
00:32:54.422 05:27:48 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.422 05:27:48 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:32:54.422 05:27:48 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:54.422 05:27:48 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:54.422 05:27:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:32:54.422 ************************************ 00:32:54.422 START TEST scheduler_create_thread 00:32:54.422 ************************************ 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:32:54.422 2 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:32:54.422 3 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:32:54.422 4 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:32:54.422 5 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:32:54.422 6 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:32:54.422 7 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:32:54.422 8 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:32:54.422 9 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:32:54.422 10 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.422 05:27:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:32:54.985 05:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.985 00:32:54.985 real 0m0.593s 00:32:54.985 user 0m0.010s 00:32:54.985 sys 0m0.004s 00:32:54.985 05:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:54.985 05:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:32:54.985 ************************************ 00:32:54.985 END TEST scheduler_create_thread 00:32:54.985 ************************************ 00:32:54.985 05:27:49 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:32:54.985 05:27:49 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 528275 00:32:54.985 05:27:49 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 528275 ']' 00:32:54.985 05:27:49 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 528275 00:32:54.985 05:27:49 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:32:54.985 05:27:49 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:54.985 05:27:49 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 528275 00:32:54.985 05:27:49 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:32:54.985 05:27:49 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:32:54.985 05:27:49 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 528275' 00:32:54.985 killing process with pid 528275 00:32:54.985 05:27:49 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 528275 00:32:54.985 05:27:49 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 528275 00:32:55.572 [2024-12-09 05:27:49.564816] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
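Each round of the app_repeat test that follows performs the same malloc-bdev/nbd round trip. Condensed from the xtrace below, with only the long workspace prefix dropped for readability, one round boils down to:

rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096          # creates Malloc0 (64 MB bdev, 4096-byte blocks)
rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096          # creates Malloc1
rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0    # export the bdevs over nbd
rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
dd if=/dev/nbd0 of=test/event/nbdtest bs=4096 count=1 iflag=direct   # 4 KiB direct-I/O read check per device

The 64/4096 arguments, the /var/tmp/spdk-nbd.sock socket and the dd parameters are taken verbatim from the trace; the per-round condensation is editorial.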
00:32:55.829 00:32:55.829 real 0m1.909s 00:32:55.829 user 0m2.600s 00:32:55.829 sys 0m0.383s 00:32:55.829 05:27:49 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:55.829 05:27:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:32:55.829 ************************************ 00:32:55.829 END TEST event_scheduler 00:32:55.829 ************************************ 00:32:55.829 05:27:49 event -- event/event.sh@51 -- # modprobe -n nbd 00:32:55.829 05:27:49 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:32:55.829 05:27:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:55.829 05:27:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:55.829 05:27:49 event -- common/autotest_common.sh@10 -- # set +x 00:32:55.829 ************************************ 00:32:55.829 START TEST app_repeat 00:32:55.829 ************************************ 00:32:55.829 05:27:49 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:32:55.829 05:27:49 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:55.829 05:27:49 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:55.829 05:27:49 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:32:55.829 05:27:49 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:32:55.829 05:27:49 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:32:55.829 05:27:49 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:32:55.829 05:27:49 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:32:55.829 05:27:49 event.app_repeat -- event/event.sh@19 -- # repeat_pid=528579 00:32:55.829 05:27:49 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:32:55.829 05:27:49 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:32:55.829 05:27:49 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 528579' 00:32:55.829 Process app_repeat pid: 528579 00:32:55.829 05:27:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:32:55.829 05:27:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:32:55.829 spdk_app_start Round 0 00:32:55.829 05:27:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 528579 /var/tmp/spdk-nbd.sock 00:32:55.829 05:27:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 528579 ']' 00:32:55.829 05:27:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:32:55.829 05:27:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:55.829 05:27:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:32:55.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:32:55.829 05:27:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:55.829 05:27:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:32:55.829 [2024-12-09 05:27:49.908611] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:32:55.829 [2024-12-09 05:27:49.908676] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid528579 ] 00:32:55.829 [2024-12-09 05:27:49.974946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:55.829 [2024-12-09 05:27:50.039784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:55.829 [2024-12-09 05:27:50.039787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:56.087 05:27:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:56.087 05:27:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:32:56.087 05:27:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:32:56.344 Malloc0 00:32:56.344 05:27:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:32:56.602 Malloc1 00:32:56.602 05:27:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:32:56.602 05:27:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:56.602 05:27:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:32:56.602 05:27:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:32:56.602 05:27:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:56.602 05:27:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:32:56.602 05:27:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:32:56.602 05:27:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:56.602 05:27:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:32:56.602 05:27:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:56.602 05:27:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:56.602 05:27:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:56.602 05:27:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:32:56.602 05:27:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:56.602 05:27:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:56.602 05:27:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:32:56.860 /dev/nbd0 00:32:56.860 05:27:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:56.860 05:27:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:56.860 05:27:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:32:56.860 05:27:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:32:56.860 05:27:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:56.860 05:27:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:56.860 05:27:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:32:56.860 05:27:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:32:56.860 05:27:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:56.860 05:27:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:56.860 05:27:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:32:57.117 1+0 records in 00:32:57.117 1+0 records out 00:32:57.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209302 s, 19.6 MB/s 00:32:57.117 05:27:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:32:57.117 05:27:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:32:57.118 05:27:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:32:57.118 05:27:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:57.118 05:27:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:32:57.118 05:27:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:57.118 05:27:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:57.118 05:27:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:32:57.375 /dev/nbd1 00:32:57.375 05:27:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:57.375 05:27:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:57.375 05:27:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:32:57.375 05:27:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:32:57.375 05:27:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:57.375 05:27:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:57.375 05:27:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:32:57.375 05:27:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:32:57.375 05:27:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:57.375 05:27:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:57.375 05:27:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:32:57.375 1+0 records in 00:32:57.375 1+0 records out 00:32:57.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021885 s, 18.7 MB/s 00:32:57.375 05:27:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:32:57.375 05:27:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:32:57.375 05:27:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:32:57.375 05:27:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:57.375 05:27:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:32:57.375 05:27:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:57.375 05:27:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:57.375 05:27:51 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:57.375 05:27:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:57.375 05:27:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:32:57.632 { 00:32:57.632 "nbd_device": "/dev/nbd0", 00:32:57.632 "bdev_name": "Malloc0" 00:32:57.632 }, 00:32:57.632 { 00:32:57.632 "nbd_device": "/dev/nbd1", 00:32:57.632 "bdev_name": "Malloc1" 00:32:57.632 } 00:32:57.632 ]' 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:32:57.632 { 00:32:57.632 "nbd_device": "/dev/nbd0", 00:32:57.632 "bdev_name": "Malloc0" 00:32:57.632 }, 00:32:57.632 { 00:32:57.632 "nbd_device": "/dev/nbd1", 00:32:57.632 "bdev_name": "Malloc1" 00:32:57.632 } 00:32:57.632 ]' 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:32:57.632 /dev/nbd1' 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:32:57.632 /dev/nbd1' 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:32:57.632 256+0 records in 00:32:57.632 256+0 records out 00:32:57.632 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00500141 s, 210 MB/s 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:32:57.632 256+0 records in 00:32:57.632 256+0 records out 00:32:57.632 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202211 s, 51.9 MB/s 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:32:57.632 256+0 records in 00:32:57.632 256+0 records out 00:32:57.632 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243525 s, 43.1 MB/s 00:32:57.632 05:27:51 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:32:57.632 05:27:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:32:57.633 05:27:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:57.633 05:27:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:57.633 05:27:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:57.633 05:27:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:32:57.633 05:27:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:57.633 05:27:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:57.890 05:27:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:57.890 05:27:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:57.890 05:27:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:57.890 05:27:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:57.890 05:27:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:57.890 05:27:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:57.890 05:27:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:32:57.890 05:27:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:32:57.890 05:27:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:57.890 05:27:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:32:58.453 05:27:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:58.453 05:27:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:58.453 05:27:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:58.453 05:27:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:58.453 05:27:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:32:58.453 05:27:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:58.453 05:27:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:32:58.453 05:27:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:32:58.453 05:27:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:58.453 05:27:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:58.453 05:27:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:58.453 05:27:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:32:58.453 05:27:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:32:58.453 05:27:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:58.710 05:27:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:32:58.710 05:27:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:32:58.710 05:27:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:58.710 05:27:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:32:58.710 05:27:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:32:58.710 05:27:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:32:58.710 05:27:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:32:58.710 05:27:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:32:58.710 05:27:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:32:58.710 05:27:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:32:58.967 05:27:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:32:59.225 [2024-12-09 05:27:53.261362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:59.225 [2024-12-09 05:27:53.315980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:59.225 [2024-12-09 05:27:53.315980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:59.225 [2024-12-09 05:27:53.368589] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:32:59.225 [2024-12-09 05:27:53.368657] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:33:02.502 05:27:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:33:02.502 05:27:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:33:02.502 spdk_app_start Round 1 00:33:02.502 05:27:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 528579 /var/tmp/spdk-nbd.sock 00:33:02.502 05:27:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 528579 ']' 00:33:02.502 05:27:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:33:02.502 05:27:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:02.502 05:27:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:33:02.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:33:02.502 05:27:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:02.502 05:27:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:33:02.502 05:27:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:02.502 05:27:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:33:02.502 05:27:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:33:02.502 Malloc0 00:33:02.502 05:27:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:33:02.760 Malloc1 00:33:02.760 05:27:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:33:02.760 05:27:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:02.760 05:27:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:33:02.760 05:27:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:33:02.760 05:27:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:02.760 05:27:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:33:02.760 05:27:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:33:02.760 05:27:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:02.760 05:27:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:33:02.760 05:27:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:02.760 05:27:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:02.760 05:27:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:02.760 05:27:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:33:02.760 05:27:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:02.760 05:27:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:02.760 05:27:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:33:03.018 /dev/nbd0 00:33:03.018 05:27:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:03.018 05:27:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:03.018 05:27:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:33:03.018 05:27:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:33:03.018 05:27:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:03.018 05:27:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:03.018 05:27:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:33:03.018 05:27:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:33:03.018 05:27:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:03.018 05:27:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:03.018 05:27:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:33:03.018 1+0 records in 00:33:03.018 1+0 records out 00:33:03.018 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219329 s, 18.7 MB/s 00:33:03.018 05:27:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:33:03.018 05:27:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:33:03.018 05:27:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:33:03.018 05:27:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:03.018 05:27:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:33:03.018 05:27:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:03.018 05:27:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:03.018 05:27:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:33:03.276 /dev/nbd1 00:33:03.276 05:27:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:03.276 05:27:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:03.276 05:27:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:33:03.276 05:27:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:33:03.276 05:27:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:03.277 05:27:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:03.277 05:27:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:33:03.277 05:27:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:33:03.277 05:27:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:03.277 05:27:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:03.277 05:27:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:33:03.277 1+0 records in 00:33:03.277 1+0 records out 00:33:03.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231096 s, 17.7 MB/s 00:33:03.535 05:27:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:33:03.535 05:27:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:33:03.535 05:27:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:33:03.535 05:27:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:03.535 05:27:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:33:03.535 05:27:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:03.535 05:27:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:03.535 05:27:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:03.535 05:27:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:03.535 05:27:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:03.793 05:27:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:33:03.793 { 00:33:03.793 "nbd_device": "/dev/nbd0", 00:33:03.793 "bdev_name": "Malloc0" 00:33:03.793 }, 00:33:03.793 { 00:33:03.793 "nbd_device": "/dev/nbd1", 00:33:03.793 "bdev_name": "Malloc1" 00:33:03.793 } 00:33:03.793 ]' 00:33:03.793 05:27:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:33:03.793 { 00:33:03.793 "nbd_device": "/dev/nbd0", 00:33:03.793 "bdev_name": "Malloc0" 00:33:03.793 }, 00:33:03.793 { 00:33:03.793 "nbd_device": "/dev/nbd1", 00:33:03.793 "bdev_name": "Malloc1" 00:33:03.793 } 00:33:03.793 ]' 00:33:03.793 05:27:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:33:03.794 /dev/nbd1' 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:33:03.794 /dev/nbd1' 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:33:03.794 256+0 records in 00:33:03.794 256+0 records out 00:33:03.794 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0049731 s, 211 MB/s 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:33:03.794 256+0 records in 00:33:03.794 256+0 records out 00:33:03.794 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209903 s, 50.0 MB/s 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:33:03.794 256+0 records in 00:33:03.794 256+0 records out 00:33:03.794 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237173 s, 44.2 MB/s 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:03.794 05:27:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:04.051 05:27:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:04.051 05:27:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:04.051 05:27:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:04.051 05:27:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:04.051 05:27:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:04.051 05:27:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:04.051 05:27:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:33:04.051 05:27:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:33:04.051 05:27:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:04.051 05:27:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:33:04.308 05:27:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:04.308 05:27:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:04.308 05:27:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:04.308 05:27:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:04.308 05:27:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:04.308 05:27:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:04.308 05:27:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:33:04.308 05:27:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:33:04.308 05:27:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:04.308 05:27:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:33:04.308 05:27:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:04.567 05:27:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:04.567 05:27:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:04.567 05:27:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:04.567 05:27:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:04.567 05:27:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:33:04.567 05:27:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:04.567 05:27:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:33:04.567 05:27:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:33:04.567 05:27:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:33:04.567 05:27:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:33:04.567 05:27:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:33:04.567 05:27:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:33:04.567 05:27:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:33:05.133 05:27:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:33:05.133 [2024-12-09 05:27:59.271556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:05.133 [2024-12-09 05:27:59.324762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:05.133 [2024-12-09 05:27:59.324762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:05.392 [2024-12-09 05:27:59.383369] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:33:05.392 [2024-12-09 05:27:59.383439] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:33:07.916 05:28:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:33:07.916 05:28:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:33:07.916 spdk_app_start Round 2 00:33:07.916 05:28:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 528579 /var/tmp/spdk-nbd.sock 00:33:07.916 05:28:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 528579 ']' 00:33:07.916 05:28:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:33:07.916 05:28:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:07.916 05:28:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:33:07.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
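Each round's teardown, as traced above, counts the attached NBD devices over RPC before and after stopping them: nbd_get_disks returns a JSON array, jq extracts the device nodes, and grep -c counts them (with || true so an empty list counts as zero rather than an error). A minimal equivalent:

    rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    echo "attached NBD devices: $count"     # 2 while the round is running, 0 after the stops below
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1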
00:33:07.916 05:28:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:07.916 05:28:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:33:08.173 05:28:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:08.173 05:28:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:33:08.173 05:28:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:33:08.430 Malloc0 00:33:08.430 05:28:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:33:08.687 Malloc1 00:33:08.687 05:28:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:33:08.687 05:28:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:08.687 05:28:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:33:08.687 05:28:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:33:08.687 05:28:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:08.687 05:28:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:33:08.687 05:28:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:33:08.687 05:28:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:08.687 05:28:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:33:08.687 05:28:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:08.687 05:28:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:08.687 05:28:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:08.687 05:28:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:33:08.687 05:28:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:08.687 05:28:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:08.687 05:28:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:33:09.251 /dev/nbd0 00:33:09.251 05:28:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:09.251 05:28:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:09.251 05:28:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:33:09.251 05:28:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:33:09.251 05:28:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:09.251 05:28:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:09.251 05:28:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:33:09.251 05:28:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:33:09.251 05:28:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:09.251 05:28:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:09.251 05:28:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:33:09.251 1+0 records in 00:33:09.251 1+0 records out 00:33:09.251 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234425 s, 17.5 MB/s 00:33:09.251 05:28:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:33:09.251 05:28:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:33:09.251 05:28:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:33:09.251 05:28:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:09.251 05:28:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:33:09.251 05:28:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:09.251 05:28:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:09.251 05:28:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:33:09.508 /dev/nbd1 00:33:09.508 05:28:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:09.508 05:28:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:09.508 05:28:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:33:09.508 05:28:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:33:09.508 05:28:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:09.508 05:28:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:09.508 05:28:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:33:09.508 05:28:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:33:09.508 05:28:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:09.508 05:28:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:09.508 05:28:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:33:09.508 1+0 records in 00:33:09.508 1+0 records out 00:33:09.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000162941 s, 25.1 MB/s 00:33:09.508 05:28:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:33:09.508 05:28:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:33:09.508 05:28:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:33:09.508 05:28:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:09.508 05:28:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:33:09.508 05:28:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:09.508 05:28:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:09.508 05:28:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:09.508 05:28:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:09.508 05:28:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:33:09.765 { 00:33:09.765 "nbd_device": "/dev/nbd0", 00:33:09.765 "bdev_name": "Malloc0" 00:33:09.765 }, 00:33:09.765 { 00:33:09.765 "nbd_device": "/dev/nbd1", 00:33:09.765 "bdev_name": "Malloc1" 00:33:09.765 } 00:33:09.765 ]' 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:33:09.765 { 00:33:09.765 "nbd_device": "/dev/nbd0", 00:33:09.765 "bdev_name": "Malloc0" 00:33:09.765 }, 00:33:09.765 { 00:33:09.765 "nbd_device": "/dev/nbd1", 00:33:09.765 "bdev_name": "Malloc1" 00:33:09.765 } 00:33:09.765 ]' 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:33:09.765 /dev/nbd1' 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:33:09.765 /dev/nbd1' 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:33:09.765 256+0 records in 00:33:09.765 256+0 records out 00:33:09.765 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00519548 s, 202 MB/s 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:33:09.765 256+0 records in 00:33:09.765 256+0 records out 00:33:09.765 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202602 s, 51.8 MB/s 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:33:09.765 256+0 records in 00:33:09.765 256+0 records out 00:33:09.765 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220217 s, 47.6 MB/s 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:09.765 05:28:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:10.022 05:28:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:10.022 05:28:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:10.022 05:28:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:10.022 05:28:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:10.022 05:28:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:10.022 05:28:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:10.022 05:28:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:33:10.022 05:28:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:33:10.022 05:28:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:10.022 05:28:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:33:10.279 05:28:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:10.279 05:28:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:10.279 05:28:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:10.279 05:28:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:10.279 05:28:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:10.279 05:28:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:10.279 05:28:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:33:10.536 05:28:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:33:10.536 05:28:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:10.536 05:28:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:33:10.536 05:28:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:10.793 05:28:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:10.793 05:28:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:10.793 05:28:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:10.793 05:28:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:10.793 05:28:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:33:10.793 05:28:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:10.793 05:28:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:33:10.793 05:28:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:33:10.793 05:28:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:33:10.793 05:28:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:33:10.793 05:28:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:33:10.793 05:28:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:33:10.793 05:28:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:33:11.051 05:28:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:33:11.307 [2024-12-09 05:28:05.320927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:11.307 [2024-12-09 05:28:05.374825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:11.307 [2024-12-09 05:28:05.374829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.307 [2024-12-09 05:28:05.425513] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:33:11.307 [2024-12-09 05:28:05.425594] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:33:14.588 05:28:08 event.app_repeat -- event/event.sh@38 -- # waitforlisten 528579 /var/tmp/spdk-nbd.sock 00:33:14.588 05:28:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 528579 ']' 00:33:14.588 05:28:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:33:14.588 05:28:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:14.588 05:28:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:33:14.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
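With all rounds done, the test waits for the listener one last time and then calls killprocess (traced next), which is deliberately careful about what it kills: check that the pid is still alive, confirm the process name on Linux and refuse to kill a sudo wrapper, then kill and wait. Roughly, and simplified from the traced autotest_common.sh logic:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0                      # already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [ "$name" = sudo ] && return 1              # never kill a sudo wrapper directly
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }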
00:33:14.588 05:28:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:14.588 05:28:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:33:14.588 05:28:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:14.588 05:28:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:33:14.588 05:28:08 event.app_repeat -- event/event.sh@39 -- # killprocess 528579 00:33:14.588 05:28:08 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 528579 ']' 00:33:14.588 05:28:08 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 528579 00:33:14.588 05:28:08 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:33:14.588 05:28:08 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:14.588 05:28:08 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 528579 00:33:14.588 05:28:08 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:14.588 05:28:08 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:14.588 05:28:08 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 528579' 00:33:14.588 killing process with pid 528579 00:33:14.588 05:28:08 event.app_repeat -- common/autotest_common.sh@973 -- # kill 528579 00:33:14.588 05:28:08 event.app_repeat -- common/autotest_common.sh@978 -- # wait 528579 00:33:14.588 spdk_app_start is called in Round 0. 00:33:14.588 Shutdown signal received, stop current app iteration 00:33:14.588 Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 reinitialization... 00:33:14.588 spdk_app_start is called in Round 1. 00:33:14.588 Shutdown signal received, stop current app iteration 00:33:14.588 Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 reinitialization... 00:33:14.588 spdk_app_start is called in Round 2. 00:33:14.588 Shutdown signal received, stop current app iteration 00:33:14.588 Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 reinitialization... 00:33:14.588 spdk_app_start is called in Round 3. 
00:33:14.588 Shutdown signal received, stop current app iteration 00:33:14.588 05:28:08 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:33:14.588 05:28:08 event.app_repeat -- event/event.sh@42 -- # return 0 00:33:14.588 00:33:14.588 real 0m18.723s 00:33:14.588 user 0m41.302s 00:33:14.588 sys 0m3.280s 00:33:14.588 05:28:08 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:14.588 05:28:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:33:14.588 ************************************ 00:33:14.588 END TEST app_repeat 00:33:14.588 ************************************ 00:33:14.588 05:28:08 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:33:14.588 05:28:08 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:33:14.588 05:28:08 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:14.588 05:28:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:14.588 05:28:08 event -- common/autotest_common.sh@10 -- # set +x 00:33:14.588 ************************************ 00:33:14.588 START TEST cpu_locks 00:33:14.588 ************************************ 00:33:14.588 05:28:08 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:33:14.588 * Looking for test storage... 00:33:14.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:33:14.588 05:28:08 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:14.588 05:28:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:33:14.588 05:28:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:14.588 05:28:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:14.588 05:28:08 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:33:14.588 05:28:08 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:14.588 05:28:08 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:14.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:14.588 --rc genhtml_branch_coverage=1 00:33:14.588 --rc genhtml_function_coverage=1 00:33:14.588 --rc genhtml_legend=1 00:33:14.588 --rc geninfo_all_blocks=1 00:33:14.588 --rc geninfo_unexecuted_blocks=1 00:33:14.588 00:33:14.588 ' 00:33:14.588 05:28:08 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:14.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:14.588 --rc genhtml_branch_coverage=1 00:33:14.588 --rc genhtml_function_coverage=1 00:33:14.588 --rc genhtml_legend=1 00:33:14.588 --rc geninfo_all_blocks=1 00:33:14.588 --rc geninfo_unexecuted_blocks=1 00:33:14.588 00:33:14.588 ' 00:33:14.588 05:28:08 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:14.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:14.588 --rc genhtml_branch_coverage=1 00:33:14.588 --rc genhtml_function_coverage=1 00:33:14.588 --rc genhtml_legend=1 00:33:14.588 --rc geninfo_all_blocks=1 00:33:14.588 --rc geninfo_unexecuted_blocks=1 00:33:14.588 00:33:14.588 ' 00:33:14.588 05:28:08 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:14.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:14.588 --rc genhtml_branch_coverage=1 00:33:14.588 --rc genhtml_function_coverage=1 00:33:14.588 --rc genhtml_legend=1 00:33:14.588 --rc geninfo_all_blocks=1 00:33:14.588 --rc geninfo_unexecuted_blocks=1 00:33:14.588 00:33:14.588 ' 00:33:14.588 05:28:08 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:33:14.588 05:28:08 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:33:14.588 05:28:08 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:33:14.588 05:28:08 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:33:14.588 05:28:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:14.588 05:28:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:14.588 05:28:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:33:14.847 ************************************ 
00:33:14.847 START TEST default_locks 00:33:14.847 ************************************ 00:33:14.847 05:28:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:33:14.847 05:28:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=531070 00:33:14.847 05:28:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:33:14.847 05:28:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 531070 00:33:14.847 05:28:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 531070 ']' 00:33:14.847 05:28:08 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:14.847 05:28:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:14.847 05:28:08 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:14.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:14.847 05:28:08 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:14.847 05:28:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:33:14.847 [2024-12-09 05:28:08.879327] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:33:14.847 [2024-12-09 05:28:08.879422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid531070 ] 00:33:14.847 [2024-12-09 05:28:08.944351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:14.847 [2024-12-09 05:28:09.000520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:15.105 05:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:15.105 05:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:33:15.105 05:28:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 531070 00:33:15.105 05:28:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 531070 00:33:15.105 05:28:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:33:15.671 lslocks: write error 00:33:15.671 05:28:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 531070 00:33:15.671 05:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 531070 ']' 00:33:15.671 05:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 531070 00:33:15.671 05:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:33:15.671 05:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:15.671 05:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 531070 00:33:15.671 05:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:15.671 05:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:15.671 05:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 531070' 
00:33:15.671 killing process with pid 531070 00:33:15.671 05:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 531070 00:33:15.671 05:28:09 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 531070 00:33:15.930 05:28:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 531070 00:33:15.930 05:28:10 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:33:15.930 05:28:10 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 531070 00:33:15.930 05:28:10 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:33:15.930 05:28:10 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:15.930 05:28:10 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:33:15.930 05:28:10 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:15.930 05:28:10 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 531070 00:33:15.930 05:28:10 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 531070 ']' 00:33:15.930 05:28:10 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:15.930 05:28:10 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:15.930 05:28:10 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:15.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:15.930 05:28:10 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:15.930 05:28:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:33:15.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (531070) - No such process 00:33:15.930 ERROR: process (pid: 531070) is no longer running 00:33:15.930 05:28:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:15.930 05:28:10 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:33:15.930 05:28:10 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:33:15.930 05:28:10 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:15.930 05:28:10 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:15.930 05:28:10 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:15.930 05:28:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:33:15.930 05:28:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:33:15.930 05:28:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:33:15.930 05:28:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:33:15.930 00:33:15.930 real 0m1.286s 00:33:15.930 user 0m1.248s 00:33:15.930 sys 0m0.513s 00:33:15.930 05:28:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:15.930 05:28:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:33:15.930 ************************************ 00:33:15.930 END TEST default_locks 00:33:15.930 ************************************ 00:33:15.930 05:28:10 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:33:15.930 05:28:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:15.930 05:28:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:15.930 05:28:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:33:16.189 ************************************ 00:33:16.189 START TEST default_locks_via_rpc 00:33:16.189 ************************************ 00:33:16.189 05:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:33:16.189 05:28:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=531239 00:33:16.189 05:28:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:33:16.189 05:28:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 531239 00:33:16.189 05:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 531239 ']' 00:33:16.189 05:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:16.189 05:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:16.189 05:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:16.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
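[editor's note] The default_locks run above starts one spdk_tgt on core 0 and confirms the core lock with lslocks before killing the target; the "lslocks: write error" is only lslocks complaining that grep -q closed the pipe early. A rough stand-alone version of the same check, assuming a local SPDK build and an environment where spdk_tgt can actually start (hugepages, permissions):

  SPDK_BIN=./build/bin/spdk_tgt            # assumption: run from the SPDK source tree
  "$SPDK_BIN" -m 0x1 &                     # claim core 0; lock file /var/tmp/spdk_cpu_lock_000 as in the transcript
  pid=$!
  sleep 3                                  # crude stand-in for the suite's waitforlisten helper
  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
      echo "pid $pid holds its CPU core lock"
  fi
  kill "$pid"; wait "$pid" || true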
00:33:16.189 05:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:16.189 05:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:16.189 [2024-12-09 05:28:10.222233] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:33:16.189 [2024-12-09 05:28:10.222343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid531239 ] 00:33:16.189 [2024-12-09 05:28:10.288466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.189 [2024-12-09 05:28:10.345942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:16.447 05:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:16.447 05:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:33:16.447 05:28:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:33:16.447 05:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.447 05:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:16.447 05:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.447 05:28:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:33:16.447 05:28:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:33:16.447 05:28:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:33:16.447 05:28:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:33:16.447 05:28:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:33:16.447 05:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.447 05:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:16.447 05:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.447 05:28:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 531239 00:33:16.447 05:28:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 531239 00:33:16.447 05:28:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:33:16.704 05:28:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 531239 00:33:16.704 05:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 531239 ']' 00:33:16.704 05:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 531239 00:33:16.704 05:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:33:16.961 05:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:16.961 05:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 531239 00:33:16.961 05:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:16.961 05:28:10 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:16.961 05:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 531239' 00:33:16.961 killing process with pid 531239 00:33:16.961 05:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 531239 00:33:16.961 05:28:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 531239 00:33:17.219 00:33:17.219 real 0m1.259s 00:33:17.219 user 0m1.216s 00:33:17.219 sys 0m0.513s 00:33:17.219 05:28:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:17.219 05:28:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:17.219 ************************************ 00:33:17.219 END TEST default_locks_via_rpc 00:33:17.219 ************************************ 00:33:17.477 05:28:11 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:33:17.477 05:28:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:17.477 05:28:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:17.478 05:28:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:33:17.478 ************************************ 00:33:17.478 START TEST non_locking_app_on_locked_coremask 00:33:17.478 ************************************ 00:33:17.478 05:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:33:17.478 05:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=531402 00:33:17.478 05:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:33:17.478 05:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 531402 /var/tmp/spdk.sock 00:33:17.478 05:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 531402 ']' 00:33:17.478 05:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:17.478 05:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:17.478 05:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:17.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:17.478 05:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:17.478 05:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:33:17.478 [2024-12-09 05:28:11.533535] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
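[editor's note] In default_locks_via_rpc above, the running target is asked over RPC to drop and then re-take its core locks (framework_disable_cpumask_locks / framework_enable_cpumask_locks); rpc_cmd is the test suite's wrapper around scripts/rpc.py. A hedged equivalent against a standalone target, assuming your SPDK version exposes these methods through rpc.py and the default /var/tmp/spdk.sock socket:

  scripts/rpc.py framework_disable_cpumask_locks           # release the per-core locks while the app keeps running
  lslocks | grep spdk_cpu_lock || echo "no core locks currently held"
  scripts/rpc.py framework_enable_cpumask_locks            # take the locks again
  lslocks | grep -q spdk_cpu_lock && echo "core locks re-acquired"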
00:33:17.478 [2024-12-09 05:28:11.533631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid531402 ] 00:33:17.478 [2024-12-09 05:28:11.599864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.478 [2024-12-09 05:28:11.657037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:17.735 05:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:17.735 05:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:33:17.735 05:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=531501 00:33:17.735 05:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:33:17.735 05:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 531501 /var/tmp/spdk2.sock 00:33:17.735 05:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 531501 ']' 00:33:17.735 05:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:33:17.735 05:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:17.735 05:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:33:17.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:33:17.735 05:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:17.735 05:28:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:33:17.993 [2024-12-09 05:28:11.971781] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:33:17.993 [2024-12-09 05:28:11.971877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid531501 ] 00:33:17.993 [2024-12-09 05:28:12.070036] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:33:17.993 [2024-12-09 05:28:12.070072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.993 [2024-12-09 05:28:12.181615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.924 05:28:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:18.924 05:28:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:33:18.924 05:28:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 531402 00:33:18.924 05:28:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 531402 00:33:18.924 05:28:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:33:19.181 lslocks: write error 00:33:19.181 05:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 531402 00:33:19.181 05:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 531402 ']' 00:33:19.181 05:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 531402 00:33:19.181 05:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:33:19.181 05:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:19.181 05:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 531402 00:33:19.438 05:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:19.438 05:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:19.438 05:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 531402' 00:33:19.438 killing process with pid 531402 00:33:19.438 05:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 531402 00:33:19.438 05:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 531402 00:33:20.368 05:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 531501 00:33:20.368 05:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 531501 ']' 00:33:20.368 05:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 531501 00:33:20.368 05:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:33:20.368 05:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:20.368 05:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 531501 00:33:20.368 05:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:20.368 05:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:20.368 05:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 531501' 00:33:20.368 killing 
process with pid 531501 00:33:20.368 05:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 531501 00:33:20.368 05:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 531501 00:33:20.627 00:33:20.627 real 0m3.328s 00:33:20.627 user 0m3.585s 00:33:20.627 sys 0m1.007s 00:33:20.627 05:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:20.627 05:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:33:20.627 ************************************ 00:33:20.627 END TEST non_locking_app_on_locked_coremask 00:33:20.627 ************************************ 00:33:20.627 05:28:14 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:33:20.627 05:28:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:20.627 05:28:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:20.627 05:28:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:33:20.885 ************************************ 00:33:20.885 START TEST locking_app_on_unlocked_coremask 00:33:20.885 ************************************ 00:33:20.885 05:28:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:33:20.885 05:28:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=531836 00:33:20.885 05:28:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:33:20.885 05:28:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 531836 /var/tmp/spdk.sock 00:33:20.885 05:28:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 531836 ']' 00:33:20.885 05:28:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:20.885 05:28:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:20.885 05:28:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:20.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:20.885 05:28:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:20.885 05:28:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:33:20.885 [2024-12-09 05:28:14.911536] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:33:20.885 [2024-12-09 05:28:14.911631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid531836 ] 00:33:20.885 [2024-12-09 05:28:14.974678] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
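[editor's note] The non_locking_app_on_locked_coremask section above shows the pattern for co-locating two targets on one core: the first instance locks core 0 as usual, and the second is started with --disable-cpumask-locks plus its own RPC socket so it neither takes nor trips over the lock. Roughly, with the paths used in the transcript:

  ./build/bin/spdk_tgt -m 0x1 &                                                   # holds /var/tmp/spdk_cpu_lock_000
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # shares core 0 without locking it
  # each instance is then reachable on its own socket:
  #   scripts/rpc.py                         -> first instance via /var/tmp/spdk.sock
  #   scripts/rpc.py -s /var/tmp/spdk2.sock  -> second instance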
00:33:20.885 [2024-12-09 05:28:14.974712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.885 [2024-12-09 05:28:15.028881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:21.142 05:28:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:21.142 05:28:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:33:21.142 05:28:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=531926 00:33:21.142 05:28:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:33:21.142 05:28:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 531926 /var/tmp/spdk2.sock 00:33:21.142 05:28:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 531926 ']' 00:33:21.142 05:28:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:33:21.142 05:28:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:21.142 05:28:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:33:21.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:33:21.142 05:28:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:21.142 05:28:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:33:21.142 [2024-12-09 05:28:15.343670] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:33:21.142 [2024-12-09 05:28:15.343758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid531926 ] 00:33:21.400 [2024-12-09 05:28:15.446830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:21.400 [2024-12-09 05:28:15.566315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:22.333 05:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:22.333 05:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:33:22.333 05:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 531926 00:33:22.333 05:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 531926 00:33:22.333 05:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:33:22.591 lslocks: write error 00:33:22.591 05:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 531836 00:33:22.591 05:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 531836 ']' 00:33:22.591 05:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 531836 00:33:22.591 05:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:33:22.591 05:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:22.591 05:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 531836 00:33:22.591 05:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:22.591 05:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:22.591 05:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 531836' 00:33:22.591 killing process with pid 531836 00:33:22.591 05:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 531836 00:33:22.591 05:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 531836 00:33:23.524 05:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 531926 00:33:23.524 05:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 531926 ']' 00:33:23.524 05:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 531926 00:33:23.524 05:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:33:23.524 05:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:23.524 05:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 531926 00:33:23.524 05:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:23.524 05:28:17 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:23.524 05:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 531926' 00:33:23.524 killing process with pid 531926 00:33:23.524 05:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 531926 00:33:23.524 05:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 531926 00:33:24.091 00:33:24.091 real 0m3.329s 00:33:24.091 user 0m3.595s 00:33:24.091 sys 0m1.000s 00:33:24.091 05:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:24.091 05:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:33:24.091 ************************************ 00:33:24.091 END TEST locking_app_on_unlocked_coremask 00:33:24.091 ************************************ 00:33:24.091 05:28:18 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:33:24.091 05:28:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:24.091 05:28:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:24.091 05:28:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:33:24.091 ************************************ 00:33:24.091 START TEST locking_app_on_locked_coremask 00:33:24.091 ************************************ 00:33:24.091 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:33:24.091 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=532270 00:33:24.091 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:33:24.091 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 532270 /var/tmp/spdk.sock 00:33:24.091 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 532270 ']' 00:33:24.091 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:24.091 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:24.091 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:24.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:24.091 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:24.091 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:33:24.091 [2024-12-09 05:28:18.295459] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
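[editor's note] Every test above gates on the suite's waitforlisten helper before issuing RPCs. A simplified stand-in (not the real helper, which retries over the RPC socket) is just to wait for the UNIX domain socket to appear:

  wait_for_sock() {                        # hypothetical helper name; timeout in seconds
      local sock=$1 timeout=${2:-30}
      for ((i = 0; i < timeout; i++)); do
          [ -S "$sock" ] && return 0       # socket exists, target is (probably) listening
          sleep 1
      done
      echo "timed out waiting for $sock" >&2
      return 1
  }
  wait_for_sock /var/tmp/spdk2.sock 30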
00:33:24.091 [2024-12-09 05:28:18.295541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid532270 ] 00:33:24.350 [2024-12-09 05:28:18.359134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.350 [2024-12-09 05:28:18.414031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.609 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:24.609 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:33:24.609 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=532403 00:33:24.609 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:33:24.609 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 532403 /var/tmp/spdk2.sock 00:33:24.609 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:33:24.609 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 532403 /var/tmp/spdk2.sock 00:33:24.609 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:33:24.609 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:24.609 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:33:24.609 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:24.609 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 532403 /var/tmp/spdk2.sock 00:33:24.609 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 532403 ']' 00:33:24.609 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:33:24.609 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:24.609 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:33:24.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:33:24.609 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:24.609 05:28:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:33:24.609 [2024-12-09 05:28:18.725233] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:33:24.609 [2024-12-09 05:28:18.725355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid532403 ] 00:33:24.609 [2024-12-09 05:28:18.827473] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 532270 has claimed it. 00:33:24.609 [2024-12-09 05:28:18.827535] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:33:25.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (532403) - No such process 00:33:25.543 ERROR: process (pid: 532403) is no longer running 00:33:25.543 05:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:25.543 05:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:33:25.543 05:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:33:25.543 05:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:25.543 05:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:25.543 05:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:25.543 05:28:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 532270 00:33:25.543 05:28:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 532270 00:33:25.543 05:28:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:33:25.543 lslocks: write error 00:33:25.543 05:28:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 532270 00:33:25.543 05:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 532270 ']' 00:33:25.543 05:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 532270 00:33:25.543 05:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:33:25.543 05:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:25.543 05:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 532270 00:33:25.800 05:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:25.800 05:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:25.800 05:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 532270' 00:33:25.800 killing process with pid 532270 00:33:25.800 05:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 532270 00:33:25.800 05:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 532270 00:33:26.059 00:33:26.059 real 0m2.011s 00:33:26.059 user 0m2.222s 00:33:26.059 sys 0m0.637s 00:33:26.059 05:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:26.059 
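[editor's note] locking_app_on_locked_coremask above is a negative test: with pid 532270 holding core 0, a second default-mode target on the same mask must fail with "Cannot create lock on core 0, probably process 532270 has claimed it." and "Unable to acquire lock on assigned core mask - exiting." A rough way to reproduce that check by hand; the second target is expected to exit quickly and non-zero, and if it ever succeeded it would simply keep running in the foreground:

  out=$(./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 2>&1)
  status=$?
  if [ $status -ne 0 ] && grep -q 'Unable to acquire lock on assigned core mask' <<<"$out"; then
      echo "core lock conflict detected as expected (exit $status)"
  fi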
05:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:33:26.059 ************************************ 00:33:26.059 END TEST locking_app_on_locked_coremask 00:33:26.059 ************************************ 00:33:26.059 05:28:20 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:33:26.059 05:28:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:26.059 05:28:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:26.059 05:28:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:33:26.317 ************************************ 00:33:26.317 START TEST locking_overlapped_coremask 00:33:26.317 ************************************ 00:33:26.317 05:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:33:26.317 05:28:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=532574 00:33:26.317 05:28:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:33:26.317 05:28:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 532574 /var/tmp/spdk.sock 00:33:26.317 05:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 532574 ']' 00:33:26.317 05:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:26.317 05:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:26.317 05:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:26.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:26.317 05:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:26.317 05:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:33:26.317 [2024-12-09 05:28:20.362720] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:33:26.317 [2024-12-09 05:28:20.362816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid532574 ] 00:33:26.317 [2024-12-09 05:28:20.429562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:26.317 [2024-12-09 05:28:20.485264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:26.317 [2024-12-09 05:28:20.485395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:26.317 [2024-12-09 05:28:20.485400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.577 05:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:26.577 05:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:33:26.577 05:28:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=532583 00:33:26.577 05:28:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:33:26.577 05:28:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 532583 /var/tmp/spdk2.sock 00:33:26.577 05:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:33:26.577 05:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 532583 /var/tmp/spdk2.sock 00:33:26.577 05:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:33:26.577 05:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:26.577 05:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:33:26.577 05:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:26.577 05:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 532583 /var/tmp/spdk2.sock 00:33:26.577 05:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 532583 ']' 00:33:26.577 05:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:33:26.577 05:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:26.577 05:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:33:26.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:33:26.577 05:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:26.577 05:28:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:33:26.833 [2024-12-09 05:28:20.819543] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:33:26.833 [2024-12-09 05:28:20.819657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid532583 ] 00:33:26.833 [2024-12-09 05:28:20.929591] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 532574 has claimed it. 00:33:26.833 [2024-12-09 05:28:20.929662] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:33:27.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (532583) - No such process 00:33:27.398 ERROR: process (pid: 532583) is no longer running 00:33:27.398 05:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:27.398 05:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:33:27.398 05:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:33:27.398 05:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:27.398 05:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:27.398 05:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:27.398 05:28:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:33:27.398 05:28:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:33:27.398 05:28:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:33:27.398 05:28:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:33:27.398 05:28:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 532574 00:33:27.398 05:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 532574 ']' 00:33:27.398 05:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 532574 00:33:27.398 05:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:33:27.398 05:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:27.398 05:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 532574 00:33:27.398 05:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:27.398 05:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:27.398 05:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 532574' 00:33:27.398 killing process with pid 532574 00:33:27.398 05:28:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 532574 00:33:27.398 05:28:21 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 532574 00:33:27.962 00:33:27.962 real 0m1.729s 00:33:27.962 user 0m4.744s 00:33:27.962 sys 0m0.481s 00:33:27.962 05:28:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:27.962 05:28:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:33:27.962 ************************************ 00:33:27.962 END TEST locking_overlapped_coremask 00:33:27.962 ************************************ 00:33:27.962 05:28:22 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:33:27.962 05:28:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:27.962 05:28:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:27.962 05:28:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:33:27.962 ************************************ 00:33:27.962 START TEST locking_overlapped_coremask_via_rpc 00:33:27.962 ************************************ 00:33:27.962 05:28:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:33:27.962 05:28:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=532866 00:33:27.962 05:28:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:33:27.962 05:28:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 532866 /var/tmp/spdk.sock 00:33:27.962 05:28:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 532866 ']' 00:33:27.962 05:28:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:27.962 05:28:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:27.962 05:28:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:27.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:27.962 05:28:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:27.962 05:28:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:27.962 [2024-12-09 05:28:22.144476] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:33:27.962 [2024-12-09 05:28:22.144587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid532866 ] 00:33:28.219 [2024-12-09 05:28:22.214688] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
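[editor's note] The locking_overlapped_coremask run above starts the first target with -m 0x7 (cores 0, 1, 2), so check_remaining_locks expects exactly /var/tmp/spdk_cpu_lock_000 through _002, and a second target on the overlapping mask 0x1c is rejected on core 2. The lock-file bookkeeping can be inspected directly while the first target is running:

  ls /var/tmp/spdk_cpu_lock_*                       # expect _000 _001 _002 for mask 0x7
  for f in /var/tmp/spdk_cpu_lock_{000..002}; do
      [ -e "$f" ] || echo "missing expected lock file: $f"
  done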
00:33:28.219 [2024-12-09 05:28:22.214730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:28.219 [2024-12-09 05:28:22.279294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:28.219 [2024-12-09 05:28:22.279365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:28.219 [2024-12-09 05:28:22.279370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:28.477 05:28:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:28.477 05:28:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:33:28.477 05:28:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=532877 00:33:28.477 05:28:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 532877 /var/tmp/spdk2.sock 00:33:28.477 05:28:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 532877 ']' 00:33:28.477 05:28:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:33:28.477 05:28:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:28.477 05:28:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:33:28.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:33:28.477 05:28:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:28.477 05:28:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:33:28.477 05:28:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:28.477 [2024-12-09 05:28:22.614149] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:33:28.477 [2024-12-09 05:28:22.614229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid532877 ] 00:33:28.734 [2024-12-09 05:28:22.719300] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:33:28.734 [2024-12-09 05:28:22.719335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:28.734 [2024-12-09 05:28:22.840351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:28.734 [2024-12-09 05:28:22.840412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:28.734 [2024-12-09 05:28:22.840415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:29.663 [2024-12-09 05:28:23.591379] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 532866 has claimed it. 
00:33:29.663 request: 00:33:29.663 { 00:33:29.663 "method": "framework_enable_cpumask_locks", 00:33:29.663 "req_id": 1 00:33:29.663 } 00:33:29.663 Got JSON-RPC error response 00:33:29.663 response: 00:33:29.663 { 00:33:29.663 "code": -32603, 00:33:29.663 "message": "Failed to claim CPU core: 2" 00:33:29.663 } 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 532866 /var/tmp/spdk.sock 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 532866 ']' 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:29.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 532877 /var/tmp/spdk2.sock 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 532877 ']' 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:33:29.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:29.663 05:28:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:29.921 05:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:29.921 05:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:33:29.921 05:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:33:29.921 05:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:33:29.921 05:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:33:29.921 05:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:33:29.921 00:33:29.921 real 0m2.037s 00:33:29.921 user 0m1.133s 00:33:29.921 sys 0m0.162s 00:33:29.921 05:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:29.921 05:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:29.921 ************************************ 00:33:29.921 END TEST locking_overlapped_coremask_via_rpc 00:33:29.921 ************************************ 00:33:29.921 05:28:24 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:33:29.921 05:28:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 532866 ]] 00:33:29.921 05:28:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 532866 00:33:29.921 05:28:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 532866 ']' 00:33:29.921 05:28:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 532866 00:33:29.921 05:28:24 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:33:30.178 05:28:24 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:30.178 05:28:24 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 532866 00:33:30.178 05:28:24 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:30.178 05:28:24 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:30.178 05:28:24 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 532866' 00:33:30.178 killing process with pid 532866 00:33:30.178 05:28:24 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 532866 00:33:30.178 05:28:24 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 532866 00:33:30.435 05:28:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 532877 ]] 00:33:30.435 05:28:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 532877 00:33:30.435 05:28:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 532877 ']' 00:33:30.435 05:28:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 532877 00:33:30.693 05:28:24 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:33:30.693 05:28:24 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:33:30.693 05:28:24 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 532877 00:33:30.693 05:28:24 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:33:30.693 05:28:24 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:33:30.693 05:28:24 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 532877' 00:33:30.693 killing process with pid 532877 00:33:30.693 05:28:24 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 532877 00:33:30.693 05:28:24 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 532877 00:33:30.951 05:28:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:33:30.951 05:28:25 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:33:30.951 05:28:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 532866 ]] 00:33:30.951 05:28:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 532866 00:33:30.951 05:28:25 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 532866 ']' 00:33:30.951 05:28:25 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 532866 00:33:30.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (532866) - No such process 00:33:30.951 05:28:25 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 532866 is not found' 00:33:30.951 Process with pid 532866 is not found 00:33:30.951 05:28:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 532877 ]] 00:33:30.951 05:28:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 532877 00:33:30.951 05:28:25 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 532877 ']' 00:33:31.209 05:28:25 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 532877 00:33:31.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (532877) - No such process 00:33:31.209 05:28:25 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 532877 is not found' 00:33:31.209 Process with pid 532877 is not found 00:33:31.209 05:28:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:33:31.209 00:33:31.209 real 0m16.520s 00:33:31.209 user 0m29.553s 00:33:31.209 sys 0m5.284s 00:33:31.209 05:28:25 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:31.209 05:28:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:33:31.209 ************************************ 00:33:31.209 END TEST cpu_locks 00:33:31.209 ************************************ 00:33:31.209 00:33:31.209 real 0m41.335s 00:33:31.209 user 1m20.185s 00:33:31.209 sys 0m9.409s 00:33:31.209 05:28:25 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:31.209 05:28:25 event -- common/autotest_common.sh@10 -- # set +x 00:33:31.209 ************************************ 00:33:31.209 END TEST event 00:33:31.209 ************************************ 00:33:31.209 05:28:25 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:33:31.209 05:28:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:31.209 05:28:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:31.209 05:28:25 -- common/autotest_common.sh@10 -- # set +x 00:33:31.209 ************************************ 00:33:31.209 START TEST thread 00:33:31.209 ************************************ 00:33:31.209 05:28:25 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:33:31.209 * Looking for test storage... 00:33:31.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:33:31.209 05:28:25 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:31.209 05:28:25 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:33:31.209 05:28:25 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:31.209 05:28:25 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:31.209 05:28:25 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:31.209 05:28:25 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:31.209 05:28:25 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:31.209 05:28:25 thread -- scripts/common.sh@336 -- # IFS=.-: 00:33:31.209 05:28:25 thread -- scripts/common.sh@336 -- # read -ra ver1 00:33:31.209 05:28:25 thread -- scripts/common.sh@337 -- # IFS=.-: 00:33:31.209 05:28:25 thread -- scripts/common.sh@337 -- # read -ra ver2 00:33:31.209 05:28:25 thread -- scripts/common.sh@338 -- # local 'op=<' 00:33:31.209 05:28:25 thread -- scripts/common.sh@340 -- # ver1_l=2 00:33:31.209 05:28:25 thread -- scripts/common.sh@341 -- # ver2_l=1 00:33:31.209 05:28:25 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:31.209 05:28:25 thread -- scripts/common.sh@344 -- # case "$op" in 00:33:31.209 05:28:25 thread -- scripts/common.sh@345 -- # : 1 00:33:31.209 05:28:25 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:31.209 05:28:25 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:31.209 05:28:25 thread -- scripts/common.sh@365 -- # decimal 1 00:33:31.209 05:28:25 thread -- scripts/common.sh@353 -- # local d=1 00:33:31.209 05:28:25 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:31.209 05:28:25 thread -- scripts/common.sh@355 -- # echo 1 00:33:31.209 05:28:25 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:33:31.209 05:28:25 thread -- scripts/common.sh@366 -- # decimal 2 00:33:31.209 05:28:25 thread -- scripts/common.sh@353 -- # local d=2 00:33:31.209 05:28:25 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:31.209 05:28:25 thread -- scripts/common.sh@355 -- # echo 2 00:33:31.209 05:28:25 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:33:31.209 05:28:25 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:31.209 05:28:25 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:31.209 05:28:25 thread -- scripts/common.sh@368 -- # return 0 00:33:31.209 05:28:25 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:31.209 05:28:25 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:31.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.209 --rc genhtml_branch_coverage=1 00:33:31.209 --rc genhtml_function_coverage=1 00:33:31.209 --rc genhtml_legend=1 00:33:31.209 --rc geninfo_all_blocks=1 00:33:31.209 --rc geninfo_unexecuted_blocks=1 00:33:31.209 00:33:31.209 ' 00:33:31.209 05:28:25 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:31.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.209 --rc genhtml_branch_coverage=1 00:33:31.209 --rc genhtml_function_coverage=1 00:33:31.209 --rc genhtml_legend=1 00:33:31.209 --rc geninfo_all_blocks=1 00:33:31.209 --rc geninfo_unexecuted_blocks=1 00:33:31.209 00:33:31.209 ' 00:33:31.209 05:28:25 thread 
-- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:31.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.209 --rc genhtml_branch_coverage=1 00:33:31.209 --rc genhtml_function_coverage=1 00:33:31.210 --rc genhtml_legend=1 00:33:31.210 --rc geninfo_all_blocks=1 00:33:31.210 --rc geninfo_unexecuted_blocks=1 00:33:31.210 00:33:31.210 ' 00:33:31.210 05:28:25 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:31.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.210 --rc genhtml_branch_coverage=1 00:33:31.210 --rc genhtml_function_coverage=1 00:33:31.210 --rc genhtml_legend=1 00:33:31.210 --rc geninfo_all_blocks=1 00:33:31.210 --rc geninfo_unexecuted_blocks=1 00:33:31.210 00:33:31.210 ' 00:33:31.210 05:28:25 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:33:31.210 05:28:25 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:33:31.210 05:28:25 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:31.210 05:28:25 thread -- common/autotest_common.sh@10 -- # set +x 00:33:31.210 ************************************ 00:33:31.210 START TEST thread_poller_perf 00:33:31.210 ************************************ 00:33:31.210 05:28:25 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:33:31.210 [2024-12-09 05:28:25.422843] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:33:31.210 [2024-12-09 05:28:25.422905] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid533379 ] 00:33:31.467 [2024-12-09 05:28:25.490230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.467 [2024-12-09 05:28:25.548952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:31.467 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:33:32.841 [2024-12-09T04:28:27.066Z] ====================================== 00:33:32.841 [2024-12-09T04:28:27.066Z] busy:2708646522 (cyc) 00:33:32.841 [2024-12-09T04:28:27.066Z] total_run_count: 363000 00:33:32.841 [2024-12-09T04:28:27.066Z] tsc_hz: 2700000000 (cyc) 00:33:32.841 [2024-12-09T04:28:27.066Z] ====================================== 00:33:32.841 [2024-12-09T04:28:27.066Z] poller_cost: 7461 (cyc), 2763 (nsec) 00:33:32.841 00:33:32.841 real 0m1.247s 00:33:32.841 user 0m1.174s 00:33:32.841 sys 0m0.067s 00:33:32.841 05:28:26 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:32.841 05:28:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:33:32.841 ************************************ 00:33:32.841 END TEST thread_poller_perf 00:33:32.841 ************************************ 00:33:32.841 05:28:26 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:33:32.841 05:28:26 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:33:32.841 05:28:26 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:32.841 05:28:26 thread -- common/autotest_common.sh@10 -- # set +x 00:33:32.841 ************************************ 00:33:32.841 START TEST thread_poller_perf 00:33:32.841 ************************************ 00:33:32.841 05:28:26 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:33:32.841 [2024-12-09 05:28:26.722253] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:33:32.841 [2024-12-09 05:28:26.722341] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid533534 ] 00:33:32.841 [2024-12-09 05:28:26.788129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.841 [2024-12-09 05:28:26.841451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.841 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:33:33.778 [2024-12-09T04:28:28.003Z] ====================================== 00:33:33.778 [2024-12-09T04:28:28.003Z] busy:2702046297 (cyc) 00:33:33.778 [2024-12-09T04:28:28.003Z] total_run_count: 4826000 00:33:33.778 [2024-12-09T04:28:28.003Z] tsc_hz: 2700000000 (cyc) 00:33:33.778 [2024-12-09T04:28:28.003Z] ====================================== 00:33:33.778 [2024-12-09T04:28:28.003Z] poller_cost: 559 (cyc), 207 (nsec) 00:33:33.778 00:33:33.778 real 0m1.235s 00:33:33.778 user 0m1.157s 00:33:33.778 sys 0m0.073s 00:33:33.778 05:28:27 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:33.778 05:28:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:33:33.778 ************************************ 00:33:33.778 END TEST thread_poller_perf 00:33:33.778 ************************************ 00:33:33.778 05:28:27 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:33:33.778 00:33:33.778 real 0m2.720s 00:33:33.778 user 0m2.460s 00:33:33.778 sys 0m0.261s 00:33:33.778 05:28:27 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:33.778 05:28:27 thread -- common/autotest_common.sh@10 -- # set +x 00:33:33.778 ************************************ 00:33:33.778 END TEST thread 00:33:33.778 ************************************ 00:33:33.778 05:28:27 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:33:33.778 05:28:27 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:33:33.778 05:28:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:33.778 05:28:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:33.778 05:28:27 -- common/autotest_common.sh@10 -- # set +x 00:33:34.038 ************************************ 00:33:34.038 START TEST app_cmdline 00:33:34.038 ************************************ 00:33:34.038 05:28:28 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:33:34.038 * Looking for test storage... 
00:33:34.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:33:34.038 05:28:28 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:34.038 05:28:28 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:33:34.038 05:28:28 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:34.038 05:28:28 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@345 -- # : 1 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:34.038 05:28:28 app_cmdline -- scripts/common.sh@368 -- # return 0 00:33:34.038 05:28:28 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:34.038 05:28:28 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:34.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.038 --rc genhtml_branch_coverage=1 00:33:34.038 --rc genhtml_function_coverage=1 00:33:34.038 --rc genhtml_legend=1 00:33:34.038 --rc geninfo_all_blocks=1 00:33:34.038 --rc geninfo_unexecuted_blocks=1 00:33:34.038 00:33:34.038 ' 00:33:34.038 05:28:28 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:34.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.038 --rc genhtml_branch_coverage=1 00:33:34.038 --rc genhtml_function_coverage=1 00:33:34.038 --rc genhtml_legend=1 00:33:34.038 --rc geninfo_all_blocks=1 00:33:34.038 --rc geninfo_unexecuted_blocks=1 
00:33:34.038 00:33:34.038 ' 00:33:34.038 05:28:28 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:34.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.038 --rc genhtml_branch_coverage=1 00:33:34.038 --rc genhtml_function_coverage=1 00:33:34.038 --rc genhtml_legend=1 00:33:34.038 --rc geninfo_all_blocks=1 00:33:34.038 --rc geninfo_unexecuted_blocks=1 00:33:34.038 00:33:34.038 ' 00:33:34.038 05:28:28 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:34.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.038 --rc genhtml_branch_coverage=1 00:33:34.038 --rc genhtml_function_coverage=1 00:33:34.038 --rc genhtml_legend=1 00:33:34.038 --rc geninfo_all_blocks=1 00:33:34.038 --rc geninfo_unexecuted_blocks=1 00:33:34.038 00:33:34.038 ' 00:33:34.038 05:28:28 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:33:34.038 05:28:28 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=533733 00:33:34.038 05:28:28 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:33:34.038 05:28:28 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 533733 00:33:34.038 05:28:28 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 533733 ']' 00:33:34.038 05:28:28 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:34.038 05:28:28 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:34.038 05:28:28 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:34.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:34.038 05:28:28 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:34.038 05:28:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:33:34.039 [2024-12-09 05:28:28.215452] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:33:34.039 [2024-12-09 05:28:28.215541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid533733 ] 00:33:34.297 [2024-12-09 05:28:28.281529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.297 [2024-12-09 05:28:28.338791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.556 05:28:28 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:34.556 05:28:28 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:33:34.556 05:28:28 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:33:34.815 { 00:33:34.815 "version": "SPDK v25.01-pre git sha1 66902d69a", 00:33:34.815 "fields": { 00:33:34.815 "major": 25, 00:33:34.815 "minor": 1, 00:33:34.815 "patch": 0, 00:33:34.815 "suffix": "-pre", 00:33:34.815 "commit": "66902d69a" 00:33:34.815 } 00:33:34.815 } 00:33:34.815 05:28:28 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:33:34.815 05:28:28 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:33:34.815 05:28:28 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:33:34.815 05:28:28 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:33:34.815 05:28:28 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:33:34.815 05:28:28 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.815 05:28:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:33:34.815 05:28:28 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:33:34.815 05:28:28 app_cmdline -- app/cmdline.sh@26 -- # sort 00:33:34.815 05:28:28 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.815 05:28:28 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:33:34.815 05:28:28 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:33:34.815 05:28:28 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:33:34.815 05:28:28 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:33:34.815 05:28:28 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:33:34.815 05:28:28 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:34.815 05:28:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:34.815 05:28:28 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:34.815 05:28:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:34.815 05:28:28 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:34.815 05:28:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:34.815 05:28:28 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:34.815 05:28:28 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:34.815 05:28:28 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:33:35.073 request: 00:33:35.073 { 00:33:35.073 "method": "env_dpdk_get_mem_stats", 00:33:35.073 "req_id": 1 00:33:35.073 } 00:33:35.073 Got JSON-RPC error response 00:33:35.073 response: 00:33:35.073 { 00:33:35.073 "code": -32601, 00:33:35.073 "message": "Method not found" 00:33:35.073 } 00:33:35.073 05:28:29 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:33:35.074 05:28:29 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:35.074 05:28:29 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:35.074 05:28:29 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:35.074 05:28:29 app_cmdline -- app/cmdline.sh@1 -- # killprocess 533733 00:33:35.074 05:28:29 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 533733 ']' 00:33:35.074 05:28:29 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 533733 00:33:35.074 05:28:29 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:33:35.074 05:28:29 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:35.074 05:28:29 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 533733 00:33:35.074 05:28:29 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:35.074 05:28:29 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:35.074 05:28:29 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 533733' 00:33:35.074 killing process with pid 533733 00:33:35.074 05:28:29 app_cmdline -- common/autotest_common.sh@973 -- # kill 533733 00:33:35.074 05:28:29 app_cmdline -- common/autotest_common.sh@978 -- # wait 533733 00:33:35.640 00:33:35.640 real 0m1.666s 00:33:35.641 user 0m2.032s 00:33:35.641 sys 0m0.497s 00:33:35.641 05:28:29 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:35.641 05:28:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:33:35.641 ************************************ 00:33:35.641 END TEST app_cmdline 00:33:35.641 ************************************ 00:33:35.641 05:28:29 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:33:35.641 05:28:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:35.641 05:28:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:35.641 05:28:29 -- common/autotest_common.sh@10 -- # set +x 00:33:35.641 ************************************ 00:33:35.641 START TEST version 00:33:35.641 ************************************ 00:33:35.641 05:28:29 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:33:35.641 * Looking for test storage... 
00:33:35.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:33:35.641 05:28:29 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:35.641 05:28:29 version -- common/autotest_common.sh@1693 -- # lcov --version 00:33:35.641 05:28:29 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:35.641 05:28:29 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:35.641 05:28:29 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:35.641 05:28:29 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:35.641 05:28:29 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:35.641 05:28:29 version -- scripts/common.sh@336 -- # IFS=.-: 00:33:35.641 05:28:29 version -- scripts/common.sh@336 -- # read -ra ver1 00:33:35.641 05:28:29 version -- scripts/common.sh@337 -- # IFS=.-: 00:33:35.641 05:28:29 version -- scripts/common.sh@337 -- # read -ra ver2 00:33:35.641 05:28:29 version -- scripts/common.sh@338 -- # local 'op=<' 00:33:35.641 05:28:29 version -- scripts/common.sh@340 -- # ver1_l=2 00:33:35.641 05:28:29 version -- scripts/common.sh@341 -- # ver2_l=1 00:33:35.641 05:28:29 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:35.641 05:28:29 version -- scripts/common.sh@344 -- # case "$op" in 00:33:35.641 05:28:29 version -- scripts/common.sh@345 -- # : 1 00:33:35.641 05:28:29 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:35.641 05:28:29 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:35.641 05:28:29 version -- scripts/common.sh@365 -- # decimal 1 00:33:35.900 05:28:29 version -- scripts/common.sh@353 -- # local d=1 00:33:35.900 05:28:29 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:35.900 05:28:29 version -- scripts/common.sh@355 -- # echo 1 00:33:35.900 05:28:29 version -- scripts/common.sh@365 -- # ver1[v]=1 00:33:35.900 05:28:29 version -- scripts/common.sh@366 -- # decimal 2 00:33:35.900 05:28:29 version -- scripts/common.sh@353 -- # local d=2 00:33:35.900 05:28:29 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:35.900 05:28:29 version -- scripts/common.sh@355 -- # echo 2 00:33:35.900 05:28:29 version -- scripts/common.sh@366 -- # ver2[v]=2 00:33:35.900 05:28:29 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:35.900 05:28:29 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:35.900 05:28:29 version -- scripts/common.sh@368 -- # return 0 00:33:35.900 05:28:29 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:35.900 05:28:29 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:35.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.900 --rc genhtml_branch_coverage=1 00:33:35.900 --rc genhtml_function_coverage=1 00:33:35.900 --rc genhtml_legend=1 00:33:35.900 --rc geninfo_all_blocks=1 00:33:35.900 --rc geninfo_unexecuted_blocks=1 00:33:35.900 00:33:35.900 ' 00:33:35.900 05:28:29 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:35.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.900 --rc genhtml_branch_coverage=1 00:33:35.900 --rc genhtml_function_coverage=1 00:33:35.900 --rc genhtml_legend=1 00:33:35.900 --rc geninfo_all_blocks=1 00:33:35.900 --rc geninfo_unexecuted_blocks=1 00:33:35.900 00:33:35.900 ' 00:33:35.900 05:28:29 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:35.900 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.900 --rc genhtml_branch_coverage=1 00:33:35.900 --rc genhtml_function_coverage=1 00:33:35.900 --rc genhtml_legend=1 00:33:35.900 --rc geninfo_all_blocks=1 00:33:35.900 --rc geninfo_unexecuted_blocks=1 00:33:35.900 00:33:35.900 ' 00:33:35.900 05:28:29 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:35.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.900 --rc genhtml_branch_coverage=1 00:33:35.900 --rc genhtml_function_coverage=1 00:33:35.900 --rc genhtml_legend=1 00:33:35.900 --rc geninfo_all_blocks=1 00:33:35.900 --rc geninfo_unexecuted_blocks=1 00:33:35.900 00:33:35.900 ' 00:33:35.900 05:28:29 version -- app/version.sh@17 -- # get_header_version major 00:33:35.900 05:28:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:33:35.900 05:28:29 version -- app/version.sh@14 -- # tr -d '"' 00:33:35.900 05:28:29 version -- app/version.sh@14 -- # cut -f2 00:33:35.900 05:28:29 version -- app/version.sh@17 -- # major=25 00:33:35.900 05:28:29 version -- app/version.sh@18 -- # get_header_version minor 00:33:35.900 05:28:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:33:35.900 05:28:29 version -- app/version.sh@14 -- # cut -f2 00:33:35.900 05:28:29 version -- app/version.sh@14 -- # tr -d '"' 00:33:35.900 05:28:29 version -- app/version.sh@18 -- # minor=1 00:33:35.900 05:28:29 version -- app/version.sh@19 -- # get_header_version patch 00:33:35.900 05:28:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:33:35.900 05:28:29 version -- app/version.sh@14 -- # cut -f2 00:33:35.900 05:28:29 version -- app/version.sh@14 -- # tr -d '"' 00:33:35.900 05:28:29 version -- app/version.sh@19 -- # patch=0 00:33:35.900 05:28:29 version -- app/version.sh@20 -- # get_header_version suffix 00:33:35.900 05:28:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:33:35.900 05:28:29 version -- app/version.sh@14 -- # cut -f2 00:33:35.900 05:28:29 version -- app/version.sh@14 -- # tr -d '"' 00:33:35.900 05:28:29 version -- app/version.sh@20 -- # suffix=-pre 00:33:35.900 05:28:29 version -- app/version.sh@22 -- # version=25.1 00:33:35.900 05:28:29 version -- app/version.sh@25 -- # (( patch != 0 )) 00:33:35.900 05:28:29 version -- app/version.sh@28 -- # version=25.1rc0 00:33:35.900 05:28:29 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:33:35.900 05:28:29 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:33:35.900 05:28:29 version -- app/version.sh@30 -- # py_version=25.1rc0 00:33:35.900 05:28:29 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:33:35.900 00:33:35.900 real 0m0.186s 00:33:35.900 user 0m0.123s 00:33:35.900 sys 0m0.087s 00:33:35.900 05:28:29 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:35.900 
05:28:29 version -- common/autotest_common.sh@10 -- # set +x 00:33:35.900 ************************************ 00:33:35.900 END TEST version 00:33:35.901 ************************************ 00:33:35.901 05:28:29 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:33:35.901 05:28:29 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:33:35.901 05:28:29 -- spdk/autotest.sh@194 -- # uname -s 00:33:35.901 05:28:29 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:33:35.901 05:28:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:33:35.901 05:28:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:33:35.901 05:28:29 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:33:35.901 05:28:29 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:33:35.901 05:28:29 -- spdk/autotest.sh@260 -- # timing_exit lib 00:33:35.901 05:28:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:35.901 05:28:29 -- common/autotest_common.sh@10 -- # set +x 00:33:35.901 05:28:29 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:33:35.901 05:28:29 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:33:35.901 05:28:29 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:33:35.901 05:28:29 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:33:35.901 05:28:29 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:33:35.901 05:28:29 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:33:35.901 05:28:29 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:33:35.901 05:28:29 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:35.901 05:28:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:35.901 05:28:29 -- common/autotest_common.sh@10 -- # set +x 00:33:35.901 ************************************ 00:33:35.901 START TEST nvmf_tcp 00:33:35.901 ************************************ 00:33:35.901 05:28:29 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:33:35.901 * Looking for test storage... 
00:33:35.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:33:35.901 05:28:30 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:35.901 05:28:30 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:33:35.901 05:28:30 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:35.901 05:28:30 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:35.901 05:28:30 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:35.901 05:28:30 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:35.901 05:28:30 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:35.901 05:28:30 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:33:35.901 05:28:30 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:33:35.901 05:28:30 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:33:35.901 05:28:30 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:33:35.901 05:28:30 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:33:35.901 05:28:30 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:33:35.901 05:28:30 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:33:35.901 05:28:30 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:36.160 05:28:30 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:33:36.160 05:28:30 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:33:36.160 05:28:30 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:36.160 05:28:30 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:36.160 05:28:30 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:33:36.160 05:28:30 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:33:36.160 05:28:30 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:36.160 05:28:30 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:33:36.160 05:28:30 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:33:36.160 05:28:30 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:33:36.160 05:28:30 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:33:36.160 05:28:30 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:36.160 05:28:30 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:33:36.160 05:28:30 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:33:36.160 05:28:30 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:36.160 05:28:30 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:36.160 05:28:30 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:33:36.160 05:28:30 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:36.160 05:28:30 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:36.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.160 --rc genhtml_branch_coverage=1 00:33:36.160 --rc genhtml_function_coverage=1 00:33:36.160 --rc genhtml_legend=1 00:33:36.160 --rc geninfo_all_blocks=1 00:33:36.160 --rc geninfo_unexecuted_blocks=1 00:33:36.160 00:33:36.160 ' 00:33:36.160 05:28:30 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:36.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.160 --rc genhtml_branch_coverage=1 00:33:36.160 --rc genhtml_function_coverage=1 00:33:36.160 --rc genhtml_legend=1 00:33:36.160 --rc geninfo_all_blocks=1 00:33:36.160 --rc geninfo_unexecuted_blocks=1 00:33:36.160 00:33:36.160 ' 00:33:36.160 05:28:30 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:33:36.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.160 --rc genhtml_branch_coverage=1 00:33:36.160 --rc genhtml_function_coverage=1 00:33:36.160 --rc genhtml_legend=1 00:33:36.160 --rc geninfo_all_blocks=1 00:33:36.160 --rc geninfo_unexecuted_blocks=1 00:33:36.160 00:33:36.160 ' 00:33:36.160 05:28:30 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:36.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.160 --rc genhtml_branch_coverage=1 00:33:36.160 --rc genhtml_function_coverage=1 00:33:36.160 --rc genhtml_legend=1 00:33:36.160 --rc geninfo_all_blocks=1 00:33:36.160 --rc geninfo_unexecuted_blocks=1 00:33:36.160 00:33:36.160 ' 00:33:36.160 05:28:30 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:33:36.160 05:28:30 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:33:36.160 05:28:30 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:33:36.160 05:28:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:36.160 05:28:30 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:36.160 05:28:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:36.160 ************************************ 00:33:36.160 START TEST nvmf_target_core 00:33:36.160 ************************************ 00:33:36.160 05:28:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:33:36.160 * Looking for test storage... 00:33:36.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:33:36.160 05:28:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:36.160 05:28:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:33:36.160 05:28:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:36.160 05:28:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:36.160 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:36.160 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:36.160 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:36.160 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:33:36.160 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:33:36.160 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:33:36.160 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:33:36.160 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:33:36.160 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:33:36.160 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:33:36.160 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:36.160 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:33:36.160 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:33:36.160 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:36.160 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:36.160 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:33:36.160 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:33:36.160 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:36.160 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:36.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.161 --rc genhtml_branch_coverage=1 00:33:36.161 --rc genhtml_function_coverage=1 00:33:36.161 --rc genhtml_legend=1 00:33:36.161 --rc geninfo_all_blocks=1 00:33:36.161 --rc geninfo_unexecuted_blocks=1 00:33:36.161 00:33:36.161 ' 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:36.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.161 --rc genhtml_branch_coverage=1 00:33:36.161 --rc genhtml_function_coverage=1 00:33:36.161 --rc genhtml_legend=1 00:33:36.161 --rc geninfo_all_blocks=1 00:33:36.161 --rc geninfo_unexecuted_blocks=1 00:33:36.161 00:33:36.161 ' 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:36.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.161 --rc genhtml_branch_coverage=1 00:33:36.161 --rc genhtml_function_coverage=1 00:33:36.161 --rc genhtml_legend=1 00:33:36.161 --rc geninfo_all_blocks=1 00:33:36.161 --rc geninfo_unexecuted_blocks=1 00:33:36.161 00:33:36.161 ' 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:36.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.161 --rc genhtml_branch_coverage=1 00:33:36.161 --rc genhtml_function_coverage=1 00:33:36.161 --rc genhtml_legend=1 00:33:36.161 --rc geninfo_all_blocks=1 00:33:36.161 --rc geninfo_unexecuted_blocks=1 00:33:36.161 00:33:36.161 ' 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:36.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:33:36.161 
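Each suite in this job is driven through the run_test helper from test/common/autotest_common.sh; that helper is what prints the START TEST / END TEST banners that follow and the real/user/sys timing block that closes each suite further down. A minimal sketch of the wrapper, inferred from the banners and timing lines visible in this log rather than copied from the tree:

  run_test() {
      # Illustrative only; the real helper lives in test/common/autotest_common.sh.
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"                    # produces the real/user/sys summary printed after each suite
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
  }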
************************************ 00:33:36.161 START TEST nvmf_abort 00:33:36.161 ************************************ 00:33:36.161 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:33:36.421 * Looking for test storage... 00:33:36.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:36.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.421 --rc genhtml_branch_coverage=1 00:33:36.421 --rc genhtml_function_coverage=1 00:33:36.421 --rc genhtml_legend=1 00:33:36.421 --rc geninfo_all_blocks=1 00:33:36.421 --rc geninfo_unexecuted_blocks=1 00:33:36.421 00:33:36.421 ' 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:36.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.421 --rc genhtml_branch_coverage=1 00:33:36.421 --rc genhtml_function_coverage=1 00:33:36.421 --rc genhtml_legend=1 00:33:36.421 --rc geninfo_all_blocks=1 00:33:36.421 --rc geninfo_unexecuted_blocks=1 00:33:36.421 00:33:36.421 ' 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:36.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.421 --rc genhtml_branch_coverage=1 00:33:36.421 --rc genhtml_function_coverage=1 00:33:36.421 --rc genhtml_legend=1 00:33:36.421 --rc geninfo_all_blocks=1 00:33:36.421 --rc geninfo_unexecuted_blocks=1 00:33:36.421 00:33:36.421 ' 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:36.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.421 --rc genhtml_branch_coverage=1 00:33:36.421 --rc genhtml_function_coverage=1 00:33:36.421 --rc genhtml_legend=1 00:33:36.421 --rc geninfo_all_blocks=1 00:33:36.421 --rc geninfo_unexecuted_blocks=1 00:33:36.421 00:33:36.421 ' 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:36.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:36.421 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:36.422 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:36.422 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:36.422 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:36.422 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:33:36.422 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
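Before nvmftestinit expands below, the assignments traced above from test/nvmf/common.sh and target/abort.sh boil down to a handful of defaults that the rest of the run leans on. Consolidated from the trace (the host NQN and host ID are regenerated by nvme gen-hostnqn on every run, so the UUID shown is specific to this job):

  # Test-wide defaults, values as echoed in the trace above.
  NVMF_PORT=4420
  NVMF_SECOND_PORT=4421
  NVMF_THIRD_PORT=4422
  NVMF_SERIAL=SPDKISFASTANDAWESOME
  NET_TYPE=phy                                        # physical NICs (cf. the phy != virt check below)
  NVME_HOSTNQN=$(nvme gen-hostnqn)                    # this job: nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55    # the uuid portion of the host NQN
  NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
  MALLOC_BDEV_SIZE=64                                 # from target/abort.sh
  MALLOC_BLOCK_SIZE=4096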
00:33:36.422 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:36.422 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:36.422 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:36.422 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:36.422 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:36.422 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:36.422 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:36.422 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:36.422 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:36.422 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:36.422 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:33:36.422 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:38.954 05:28:32 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:38.954 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:38.954 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:38.954 05:28:32 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:38.954 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:38.954 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:38.954 05:28:32 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:38.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:38.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:33:38.954 00:33:38.954 --- 10.0.0.2 ping statistics --- 00:33:38.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:38.954 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:38.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:38.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:33:38.954 00:33:38.954 --- 10.0.0.1 ping statistics --- 00:33:38.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:38.954 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=535827 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 535827 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 535827 ']' 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:38.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:38.954 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:38.954 [2024-12-09 05:28:32.884148] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
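The nvmf_tcp_init block above wires the two e810 ports into the test topology: cvl_0_0 becomes the target interface and is moved into its own network namespace, cvl_0_1 stays in the root namespace as the initiator, each side gets a 10.0.0.x/24 address, an iptables rule opens TCP port 4420 (tagged with an SPDK_NVMF comment so teardown can strip it again), and a ping in each direction confirms the path before the target starts. Pulled together from the trace:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                                     # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                       # target namespace -> initiator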
00:33:38.954 [2024-12-09 05:28:32.884239] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:38.954 [2024-12-09 05:28:32.957203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:38.954 [2024-12-09 05:28:33.018662] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:38.954 [2024-12-09 05:28:33.018728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:38.954 [2024-12-09 05:28:33.018743] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:38.954 [2024-12-09 05:28:33.018759] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:38.954 [2024-12-09 05:28:33.018769] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:38.954 [2024-12-09 05:28:33.020354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:38.954 [2024-12-09 05:28:33.020417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:38.954 [2024-12-09 05:28:33.020421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:38.954 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:38.954 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:33:38.954 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:38.954 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:38.954 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:38.954 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:38.954 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:33:38.954 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.954 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:38.954 [2024-12-09 05:28:33.167813] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:38.954 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.954 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:33:38.954 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.954 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:39.213 Malloc0 00:33:39.213 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.213 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:39.213 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.213 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:39.213 Delay0 
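nvmfappstart then launches the target inside that namespace and waits on its RPC socket; with core mask 0xE the app reports three reactors on cores 1-3, as the notices above show. abort.sh follows up over RPC with a TCP transport, a 64 MB malloc bdev (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=4096) and a delay bdev stacked on top, which keeps I/O outstanding long enough for the aborts to have something to hit. In outline, with paths shortened and rpc_cmd being the harness's wrapper around the RPC socket at /var/tmp/spdk.sock:

  # Target app: shm id 0 (-i), tracepoint group mask 0xFFFF (-e), cores 1-3 (-m 0xE).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  waitforlisten "$nvmfpid"                                  # blocks until /var/tmp/spdk.sock answers

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256    # transport flags exactly as recorded above
  rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
  rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000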
00:33:39.213 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.213 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:39.213 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.213 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:39.213 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.213 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:33:39.213 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.213 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:39.213 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.213 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:39.213 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.213 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:39.213 [2024-12-09 05:28:33.234735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:39.213 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.213 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:39.213 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.213 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:39.213 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.213 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:33:39.213 [2024-12-09 05:28:33.391369] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:33:41.741 Initializing NVMe Controllers 00:33:41.741 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:33:41.741 controller IO queue size 128 less than required 00:33:41.741 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:33:41.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:33:41.741 Initialization complete. Launching workers. 
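The remaining RPCs and the workload itself complete the picture: Delay0 is attached as a namespace of nqn.2016-06.io.spdk:cnode0, data and discovery listeners are opened on 10.0.0.2:4420, and the abort example is pointed at that address from the initiator side. Condensed from the trace (path shortened):

  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0             # allow any host, serial SPDK0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128                                                # queue depth 128, hence the queue-size warning above

The NS/CTRLR summary that follows (I/O completed, aborts submitted, success/unsuccessful counts) is the example's own accounting; the suite then deletes the subsystem, tears the target down and prints its timing.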
00:33:41.741 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28037 00:33:41.741 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28102, failed to submit 62 00:33:41.741 success 28041, unsuccessful 61, failed 0 00:33:41.741 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:41.741 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.741 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:41.741 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.741 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:33:41.741 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:33:41.741 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:41.741 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:33:41.741 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:41.741 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:33:41.741 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:41.741 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:41.741 rmmod nvme_tcp 00:33:41.741 rmmod nvme_fabrics 00:33:41.741 rmmod nvme_keyring 00:33:41.741 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:41.741 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:33:41.741 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:33:41.741 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 535827 ']' 00:33:41.741 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 535827 00:33:41.741 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 535827 ']' 00:33:41.741 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 535827 00:33:41.741 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:33:41.741 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:41.741 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 535827 00:33:41.741 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:41.741 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:41.741 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 535827' 00:33:41.741 killing process with pid 535827 00:33:41.741 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 535827 00:33:41.741 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 535827 00:33:42.071 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:42.071 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:42.071 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:42.071 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:33:42.071 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:33:42.071 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:42.071 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:33:42.071 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:42.071 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:42.071 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:42.071 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:42.071 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:44.088 00:33:44.088 real 0m7.678s 00:33:44.088 user 0m11.293s 00:33:44.088 sys 0m2.696s 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:44.088 ************************************ 00:33:44.088 END TEST nvmf_abort 00:33:44.088 ************************************ 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:33:44.088 ************************************ 00:33:44.088 START TEST nvmf_ns_hotplug_stress 00:33:44.088 ************************************ 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:33:44.088 * Looking for test storage... 
00:33:44.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:44.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.088 --rc genhtml_branch_coverage=1 00:33:44.088 --rc genhtml_function_coverage=1 00:33:44.088 --rc genhtml_legend=1 00:33:44.088 --rc geninfo_all_blocks=1 00:33:44.088 --rc geninfo_unexecuted_blocks=1 00:33:44.088 00:33:44.088 ' 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:44.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.088 --rc genhtml_branch_coverage=1 00:33:44.088 --rc genhtml_function_coverage=1 00:33:44.088 --rc genhtml_legend=1 00:33:44.088 --rc geninfo_all_blocks=1 00:33:44.088 --rc geninfo_unexecuted_blocks=1 00:33:44.088 00:33:44.088 ' 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:44.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.088 --rc genhtml_branch_coverage=1 00:33:44.088 --rc genhtml_function_coverage=1 00:33:44.088 --rc genhtml_legend=1 00:33:44.088 --rc geninfo_all_blocks=1 00:33:44.088 --rc geninfo_unexecuted_blocks=1 00:33:44.088 00:33:44.088 ' 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:44.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.088 --rc genhtml_branch_coverage=1 00:33:44.088 --rc genhtml_function_coverage=1 00:33:44.088 --rc genhtml_legend=1 00:33:44.088 --rc geninfo_all_blocks=1 00:33:44.088 --rc geninfo_unexecuted_blocks=1 00:33:44.088 00:33:44.088 ' 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.088 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.089 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.089 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:33:44.089 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.089 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:33:44.089 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:44.089 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:44.089 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:44.089 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:44.089 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:44.089 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:44.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:44.089 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:44.089 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:44.089 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:44.089 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:44.089 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:33:44.089 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:44.089 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:44.089 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:44.089 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:44.089 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:44.089 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:44.089 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:44.089 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.089 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:44.089 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:44.089 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:33:44.089 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:46.626 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.626 
05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:46.626 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:46.626 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:46.626 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:46.626 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:46.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:46.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:33:46.627 00:33:46.627 --- 10.0.0.2 ping statistics --- 00:33:46.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.627 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:46.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:46.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:33:46.627 00:33:46.627 --- 10.0.0.1 ping statistics --- 00:33:46.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.627 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=538192 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 538192 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
538192 ']' 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:46.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:46.627 [2024-12-09 05:28:40.557439] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:33:46.627 [2024-12-09 05:28:40.557534] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:46.627 [2024-12-09 05:28:40.636543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:46.627 [2024-12-09 05:28:40.699258] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:46.627 [2024-12-09 05:28:40.699344] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:46.627 [2024-12-09 05:28:40.699359] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:46.627 [2024-12-09 05:28:40.699370] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:46.627 [2024-12-09 05:28:40.699380] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
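The nvmf/common.sh trace above (nvmf_tcp_init through nvmfappstart) boils down to a small amount of plumbing: the first e810 port is moved into a private network namespace and becomes the target side, the second port stays in the host namespace as the initiator, and the target application is launched inside that namespace. The following is a minimal sketch assembled only from the commands visible in the trace; $SPDK_DIR stands in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk path, and the interface names, addresses, and core mask are the ones the log reports.

  NS=cvl_0_0_ns_spdk                       # target-side namespace, as named in the trace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"          # cvl_0_0 becomes the target port
  ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator IP stays in the host namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                       # initiator -> target reachability check
  ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator
  # nvmfappstart then runs the target inside the namespace with the 0xE core mask:
  ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &

The reactor notices that follow in the log (cores 1, 2, 3) are the direct consequence of that -m 0xE mask.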
00:33:46.627 [2024-12-09 05:28:40.702296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:46.627 [2024-12-09 05:28:40.702415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:46.627 [2024-12-09 05:28:40.706287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:46.627 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:46.885 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:46.885 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:33:46.885 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:47.143 [2024-12-09 05:28:41.122234] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:47.143 05:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:47.401 05:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:47.658 [2024-12-09 05:28:41.652955] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:47.658 05:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:47.916 05:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:33:48.174 Malloc0 00:33:48.174 05:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:48.432 Delay0 00:33:48.432 05:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:48.689 05:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:33:48.947 NULL1 00:33:48.947 05:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:33:49.205 05:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=538594 00:33:49.205 05:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:33:49.205 05:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:33:49.205 05:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:50.576 Read completed with error (sct=0, sc=11) 00:33:50.576 05:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:50.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:50.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:50.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:50.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:50.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:50.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:50.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:50.576 05:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:33:50.576 05:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:33:50.833 true 00:33:50.833 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:33:50.833 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:51.760 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:52.016 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:33:52.016 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:33:52.273 true 00:33:52.273 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:33:52.273 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:52.529 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:52.786 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:33:52.786 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:33:53.044 true 00:33:53.044 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:33:53.044 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:53.301 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:53.558 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:33:53.558 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:33:53.815 true 00:33:53.815 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:33:53.815 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:54.747 05:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:54.747 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:55.004 05:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:33:55.005 05:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:33:55.262 true 00:33:55.262 05:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:33:55.262 05:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:55.521 05:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:55.778 05:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:33:55.778 05:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:33:56.035 true 00:33:56.035 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:33:56.035 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:56.292 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:56.550 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:33:56.550 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:33:56.808 true 00:33:56.808 05:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:33:56.808 05:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:58.178 05:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:58.178 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:58.178 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:33:58.178 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:33:58.435 true 00:33:58.435 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:33:58.435 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:58.692 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:58.949 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:33:58.949 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:33:59.207 true 00:33:59.207 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:33:59.207 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:59.464 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:59.722 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:33:59.722 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:33:59.979 true 00:33:59.979 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:33:59.979 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:00.912 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:01.168 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:34:01.168 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:34:01.425 true 00:34:01.425 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:34:01.425 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:01.681 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:01.937 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:34:01.937 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:34:02.194 true 00:34:02.194 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:34:02.194 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:02.451 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:03.014 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:34:03.014 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:34:03.014 true 00:34:03.014 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:34:03.014 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:04.380 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:04.380 05:28:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:34:04.380 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:34:04.635 true 00:34:04.635 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:34:04.635 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:04.891 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:05.148 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:34:05.148 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:34:05.405 true 00:34:05.405 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:34:05.405 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:05.662 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:06.228 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:34:06.228 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:34:06.228 true 00:34:06.228 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:34:06.228 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:07.162 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:07.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:07.419 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:34:07.419 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:34:07.677 true 00:34:07.677 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:34:07.677 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:34:07.935 05:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:08.193 05:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:34:08.193 05:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:34:08.451 true 00:34:08.451 05:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:34:08.451 05:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:08.709 05:29:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:08.966 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:34:09.224 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:34:09.482 true 00:34:09.482 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:34:09.482 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:10.411 05:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:10.667 05:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:34:10.667 05:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:34:10.923 true 00:34:10.923 05:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:34:10.923 05:29:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:11.179 05:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:11.435 05:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:34:11.435 05:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:34:11.692 true 00:34:11.692 05:29:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:34:11.692 05:29:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:12.624 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:12.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:12.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:12.624 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:34:12.624 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:34:12.882 true 00:34:13.140 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:34:13.140 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:13.397 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:13.654 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:34:13.654 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:34:13.911 true 00:34:13.911 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:34:13.911 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:14.474 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:14.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:14.986 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:34:14.987 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:34:15.243 true 00:34:15.243 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:34:15.243 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:15.499 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:15.756 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:34:15.756 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:34:16.013 true 00:34:16.013 05:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:34:16.013 05:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:16.577 05:29:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:16.834 05:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:34:16.834 05:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:34:17.091 true 00:34:17.349 05:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:34:17.349 05:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:17.606 05:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:17.864 05:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:34:17.864 05:29:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:34:18.120 true 00:34:18.120 05:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:34:18.120 05:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:18.377 05:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:18.634 05:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:34:18.634 05:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:34:18.892 true 00:34:18.892 05:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:34:18.892 05:29:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:19.826 Initializing NVMe Controllers 00:34:19.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:34:19.826 Controller IO queue size 128, less than required. 00:34:19.826 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:19.826 Controller IO queue size 128, less than required. 00:34:19.826 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:19.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:19.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:34:19.826 Initialization complete. Launching workers. 00:34:19.826 ======================================================== 00:34:19.826 Latency(us) 00:34:19.826 Device Information : IOPS MiB/s Average min max 00:34:19.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 489.93 0.24 114942.60 3017.70 1014224.18 00:34:19.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8867.31 4.33 14435.63 3344.13 453011.33 00:34:19.826 ======================================================== 00:34:19.826 Total : 9357.24 4.57 19698.04 3017.70 1014224.18 00:34:19.826 00:34:19.826 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:20.085 05:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:34:20.085 05:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:34:20.343 true 00:34:20.343 05:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 538594 00:34:20.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (538594) - No such process 00:34:20.343 05:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 538594 00:34:20.343 05:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:20.601 05:29:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:20.860 05:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:34:20.860 05:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:34:20.860 05:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:34:20.860 05:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:20.860 05:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:34:21.117 null0 00:34:21.117 05:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:21.117 05:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:21.117 05:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:34:21.374 null1 00:34:21.374 05:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:21.374 05:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:21.374 05:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:34:21.630 null2 00:34:21.630 05:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:21.630 05:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:21.630 05:29:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:34:21.887 null3 00:34:22.144 05:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:22.144 05:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:22.144 05:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:34:22.144 null4 00:34:22.401 05:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:22.401 05:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:22.401 05:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:34:22.658 null5 00:34:22.658 05:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:22.658 05:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:22.658 05:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:34:22.915 null6 00:34:22.915 05:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:22.915 05:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:22.915 05:29:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:34:23.174 null7 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:34:23.174 05:29:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
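
The interleaved xtrace entries above come from the test's add_remove helper, which each background worker runs against its own namespace ID and null bdev. A minimal reconstruction of that helper, pieced together from the ns_hotplug_stress.sh@14-@18 trace lines (the rpc_py path is the one this job uses; the real script may differ in exact layout):

    # Sketch of add_remove() as implied by the trace; not the verbatim script source.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    add_remove() {
        local nsid=$1 bdev=$2

        # Attach and detach the same namespace ten times in a row, racing the
        # seven other workers doing the same thing on this subsystem.
        for ((i = 0; i < 10; i++)); do
            $rpc_py nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid
        done
    }
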
00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:23.174 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:23.175 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
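
The ns_hotplug_stress.sh@58-@64 entries show how those workers are set up, and the wait on all eight worker PIDs appears in the very next trace entry (@66). A sketch of that launch sequence, again reconstructed from the trace rather than copied from the script (the backgrounding '&' is implied by the pids+=($!) bookkeeping):

    nthreads=8
    pids=()

    # One 100 MB null bdev with a 4096-byte block size per worker (null0 .. null7),
    # matching the bdev_null_create calls in the trace.
    for ((i = 0; i < nthreads; i++)); do
        $rpc_py bdev_null_create null$i 100 4096
    done

    # Worker i exercises namespace ID i+1 backed by null$i, all eight in parallel.
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) null$i &
        pids+=($!)
    done

    # Block until every worker has finished its ten add/remove iterations.
    wait "${pids[@]}"
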
00:34:23.175 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:34:23.175 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:23.175 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:23.175 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:34:23.175 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:23.175 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 542688 542689 542692 542694 542697 542699 542701 542703 00:34:23.175 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:23.175 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:23.432 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:23.432 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:23.432 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:23.432 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:23.432 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:23.432 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:23.432 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:23.432 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:23.690 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:23.690 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:23.690 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:23.690 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:23.690 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:23.690 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:23.690 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:23.690 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:23.690 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:23.690 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:23.690 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:23.690 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:23.690 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:23.690 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:23.690 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:23.690 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:23.690 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:23.690 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:23.690 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:23.690 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:23.690 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:23.690 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:23.690 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:23.690 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:23.948 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:23.948 05:29:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:23.948 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:23.948 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:23.948 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:23.948 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:23.948 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:23.948 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:24.207 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:24.207 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:24.207 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:24.207 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:24.207 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:24.207 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:24.207 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:24.207 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:24.207 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:24.207 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:24.207 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:24.207 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:24.207 05:29:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:24.207 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:24.207 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:24.207 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:24.207 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:24.207 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:24.207 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:24.207 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:24.207 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:24.207 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:24.207 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:24.207 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:24.770 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:24.770 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:24.770 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:24.770 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:24.770 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:24.770 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:24.770 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:24.770 05:29:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:25.026 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:25.026 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:25.026 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:25.026 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:25.026 05:29:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:25.026 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:25.026 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:25.026 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:25.026 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:25.026 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:25.026 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:25.026 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:25.026 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:25.026 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:25.026 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:25.026 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:25.026 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:25.026 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:25.026 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:25.026 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:25.026 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
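
Each RPC the workers race on also works as a one-off command against the running target; the invocations below are lifted from entries earlier in this log (only reordered for readability), using the same workspace rpc.py path:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc_py bdev_null_create null0 100 4096                               # create the backing null bdev
    $rpc_py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0   # attach it as namespace 1
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1         # detach namespace 1 again
    $rpc_py bdev_null_resize NULL1 1029                                   # grow a null bdev, as in the earlier resize loop
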
00:34:25.026 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:25.026 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:25.026 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:25.283 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:25.283 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:25.283 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:25.283 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:25.283 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:25.283 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:25.283 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:25.283 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:25.540 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:25.540 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:25.540 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:25.540 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:25.540 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:25.540 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:25.540 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:25.540 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:25.540 05:29:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:25.540 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:25.540 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:25.540 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:25.540 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:25.540 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:25.540 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:25.540 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:25.540 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:25.540 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:25.540 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:25.540 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:25.541 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:25.541 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:25.541 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:25.541 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:25.798 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:25.798 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:25.798 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:25.798 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:25.798 05:29:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:25.798 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:25.798 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:25.798 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:26.055 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:26.055 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:26.055 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:26.055 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:26.055 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:26.055 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:26.055 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:26.055 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:26.055 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:26.055 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:26.055 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:26.055 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:26.055 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:26.055 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:26.055 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:26.055 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:26.055 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:26.055 05:29:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:26.055 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:26.055 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:26.055 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:26.055 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:26.055 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:26.055 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:26.312 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:26.312 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:26.312 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:26.312 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:26.312 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:26.312 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:26.568 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:26.568 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:26.824 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:26.824 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:26.824 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:26.824 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:26.824 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:26.824 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:26.824 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:26.824 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:26.824 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:26.824 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:26.824 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:26.824 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:26.824 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:26.824 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:26.824 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:26.824 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:26.824 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:26.824 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:26.824 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:26.824 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:26.825 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:26.825 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:26.825 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:26.825 05:29:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:27.081 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:27.081 05:29:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:27.081 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:27.081 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:27.081 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:27.081 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:27.081 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:27.081 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:27.337 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:27.337 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:27.337 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:27.337 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:27.337 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:27.337 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:27.337 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:27.337 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:27.337 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:27.337 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:27.337 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:27.337 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:27.337 05:29:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:27.337 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:27.337 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:27.337 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:27.337 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:27.337 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:27.337 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:27.337 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:27.337 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:27.337 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:27.337 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:27.337 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:27.594 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:27.594 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:27.594 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:27.594 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:27.594 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:27.594 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:27.594 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:27.594 05:29:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:27.850 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:27.850 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:27.850 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:27.850 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:27.850 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:27.850 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:27.850 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:27.851 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:27.851 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:27.851 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:27.851 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:27.851 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:27.851 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:27.851 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:27.851 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:27.851 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:27.851 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:27.851 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:27.851 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:27.851 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:27.851 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
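
One way to watch this churn from outside the test (not something this log does itself) is to poll the subsystem state between iterations; nvmf_get_subsystems is a standard scripts/rpc.py call, so the snippet below is an illustrative addition rather than part of ns_hotplug_stress.sh:

    # Hypothetical helper, not taken from the test: dump the subsystems (including
    # the namespaces currently attached to cnode1) while the workers are running.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc_py nvmf_get_subsystems
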
00:34:27.851 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:27.851 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:27.851 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:28.108 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:28.108 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:28.108 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:28.108 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:28.108 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:28.108 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:28.108 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:28.365 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:28.623 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:28.623 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:28.623 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:28.623 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:28.623 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:28.623 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:28.623 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:28.623 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:28.623 05:29:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:28.624 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:28.624 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:28.624 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:28.624 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:28.624 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:28.624 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:28.624 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:28.624 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:28.624 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:28.624 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:28.624 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:28.624 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:28.624 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:28.624 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:28.624 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:28.883 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:28.883 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:28.883 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:28.883 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:28.883 05:29:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:28.883 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:28.883 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:28.883 05:29:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:29.140 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:34:29.141 05:29:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:29.141 rmmod nvme_tcp 00:34:29.141 rmmod nvme_fabrics 00:34:29.141 rmmod nvme_keyring 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 538192 ']' 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 538192 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 538192 ']' 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 538192 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 538192 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 538192' 00:34:29.141 killing process with pid 538192 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 538192 00:34:29.141 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 538192 00:34:29.399 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:29.399 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:29.400 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:29.400 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:34:29.400 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:34:29.400 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:29.400 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:34:29.400 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:29.400 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:29.400 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:29.400 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:34:29.400 05:29:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:31.939 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:31.939 00:34:31.939 real 0m47.550s 00:34:31.939 user 3m41.092s 00:34:31.939 sys 0m15.825s 00:34:31.939 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:31.939 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:31.939 ************************************ 00:34:31.939 END TEST nvmf_ns_hotplug_stress 00:34:31.939 ************************************ 00:34:31.939 05:29:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:34:31.939 05:29:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:31.939 05:29:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:31.939 05:29:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:34:31.939 ************************************ 00:34:31.939 START TEST nvmf_delete_subsystem 00:34:31.939 ************************************ 00:34:31.939 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:34:31.939 * Looking for test storage... 00:34:31.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:31.939 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:31.939 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:31.940 05:29:25 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:31.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:31.940 --rc genhtml_branch_coverage=1 00:34:31.940 --rc genhtml_function_coverage=1 00:34:31.940 --rc genhtml_legend=1 00:34:31.940 --rc geninfo_all_blocks=1 00:34:31.940 --rc geninfo_unexecuted_blocks=1 00:34:31.940 00:34:31.940 ' 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:31.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:31.940 --rc genhtml_branch_coverage=1 00:34:31.940 --rc genhtml_function_coverage=1 00:34:31.940 --rc genhtml_legend=1 00:34:31.940 --rc geninfo_all_blocks=1 00:34:31.940 --rc geninfo_unexecuted_blocks=1 00:34:31.940 00:34:31.940 ' 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:31.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:31.940 --rc genhtml_branch_coverage=1 00:34:31.940 --rc genhtml_function_coverage=1 00:34:31.940 --rc genhtml_legend=1 00:34:31.940 --rc geninfo_all_blocks=1 00:34:31.940 --rc geninfo_unexecuted_blocks=1 00:34:31.940 00:34:31.940 ' 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:31.940 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:31.940 --rc genhtml_branch_coverage=1 00:34:31.940 --rc genhtml_function_coverage=1 00:34:31.940 --rc genhtml_legend=1 00:34:31.940 --rc geninfo_all_blocks=1 00:34:31.940 --rc geninfo_unexecuted_blocks=1 00:34:31.940 00:34:31.940 ' 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:31.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:34:31.940 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:31.941 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:31.941 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:31.941 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:31.941 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:31.941 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:31.941 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:31.941 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:31.941 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:31.941 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:31.941 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:34:31.941 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:33.990 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:33.990 
05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:33.990 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:33.990 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:33.990 Found net devices under 0000:0a:00.1: cvl_0_1 
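Up to this point common.sh has enumerated the supported NVMe-oF NICs: it matched the two Intel E810 ports (0x8086:0x159b, ice driver) at 0000:0a:00.0 and 0000:0a:00.1 and resolved each PCI address to its kernel net device by globbing the device's net/ directory in sysfs, yielding cvl_0_0 and cvl_0_1. A rough sketch of that lookup, with only the two addresses from this log hard-coded for illustration:

    # PCI-to-netdev resolution as traced above; the sysfs glob and the
    # ##*/ stripping mirror the common.sh lines shown in the log.
    net_devs=()
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] || continue        # no netdev bound to this port
        pci_net_devs=("${pci_net_devs[@]##*/}")        # keep just the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

On this test bed the result is net_devs=(cvl_0_0 cvl_0_1); the lines that follow use the first as the target interface and the second as the initiator interface.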
00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:33.990 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:33.991 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:33.991 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:33.991 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:33.991 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:33.991 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:33.991 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:33.991 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:33.991 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:33.991 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:33.991 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:33.991 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:33.991 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:33.991 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:33.991 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:33.991 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:33.991 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:33.991 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:33.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:33.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:34:33.991 00:34:33.991 --- 10.0.0.2 ping statistics --- 00:34:33.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:33.991 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:34:34.249 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:34.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:34.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:34:34.249 00:34:34.249 --- 10.0.0.1 ping statistics --- 00:34:34.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:34.249 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:34:34.249 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:34.249 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:34:34.249 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:34.249 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:34.249 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:34.249 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:34.249 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:34.249 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:34.249 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:34.249 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:34:34.249 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:34.249 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:34.249 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:34.250 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=545598 00:34:34.250 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:34.250 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 545598 00:34:34.250 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 545598 ']' 00:34:34.250 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:34.250 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:34.250 05:29:28 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:34.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:34.250 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:34.250 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:34.250 [2024-12-09 05:29:28.295615] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:34:34.250 [2024-12-09 05:29:28.295708] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:34.250 [2024-12-09 05:29:28.368040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:34.250 [2024-12-09 05:29:28.426992] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:34.250 [2024-12-09 05:29:28.427061] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:34.250 [2024-12-09 05:29:28.427075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:34.250 [2024-12-09 05:29:28.427086] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:34.250 [2024-12-09 05:29:28.427097] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:34.250 [2024-12-09 05:29:28.428529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:34.250 [2024-12-09 05:29:28.428535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:34.508 [2024-12-09 05:29:28.583762] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:34.508 05:29:28 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:34.508 [2024-12-09 05:29:28.599940] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:34.508 NULL1 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:34.508 Delay0 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=545624 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:34:34.508 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:34:34.508 [2024-12-09 05:29:28.684731] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
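At this point the trace has brought the target up inside the cvl_0_0_ns_spdk namespace (nvmf_tgt -m 0x3, pid 545598), created the TCP transport and nqn.2016-06.io.spdk:cnode1, exposed it on 10.0.0.2 port 4420, backed it with a delay bdev (Delay0 layered on the null bdev NULL1), and launched spdk_nvme_perf (pid 545624) against it. The test then deletes the subsystem while that I/O is still in flight; the "starting I/O failed: -6" and (sct=0, sc=8) completions that follow are the expected fallout of tearing the subsystem down under load. A condensed sketch of the sequence, assuming a plain rpc.py invocation in place of the rpc_cmd wrapper and abbreviating the binary paths:

    # Condensed delete_subsystem flow as traced above; all RPC arguments are
    # copied from the log, the backgrounding/wait handling is an assumption.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512                 # 1000 MB null bdev, 512 B blocks
    "$rpc" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0

    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
                   -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    "$rpc" nvmf_delete_subsystem "$nqn"                    # outstanding I/O gets aborted
    wait "$perf_pid" || true                               # perf exits with the I/O errors seen below

The delay bdev is what guarantees there is always queued I/O to abort at the moment the subsystem disappears.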
00:34:37.033 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:37.033 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.033 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 starting I/O failed: -6 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 starting I/O failed: -6 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 starting I/O failed: -6 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Write completed with error (sct=0, sc=8) 00:34:37.033 starting I/O failed: -6 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 starting I/O failed: -6 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Write completed with error (sct=0, sc=8) 00:34:37.033 starting I/O failed: -6 00:34:37.033 Write completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Write completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 starting I/O failed: -6 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Write completed with error (sct=0, sc=8) 00:34:37.033 starting I/O failed: -6 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 starting I/O failed: -6 00:34:37.033 Write completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Write completed with error (sct=0, sc=8) 00:34:37.033 Write completed with error (sct=0, sc=8) 00:34:37.033 starting I/O failed: -6 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 [2024-12-09 05:29:30.895717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fed98000c40 is same with the state(6) to be set 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Write completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Write completed with error (sct=0, sc=8) 00:34:37.033 Write completed with error (sct=0, sc=8) 
00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Write completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Write completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Write completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Write completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Write completed with error (sct=0, sc=8) 00:34:37.033 Write completed with error (sct=0, sc=8) 00:34:37.033 Write completed with error (sct=0, sc=8) 00:34:37.033 Write completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Write completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.033 Read completed with error (sct=0, sc=8) 00:34:37.034 Write completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Write completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 [2024-12-09 05:29:30.896879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fed9800d4b0 is same with the state(6) to be set 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 starting I/O failed: -6 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Write completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 starting I/O failed: -6 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 starting I/O failed: -6 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Write completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 starting I/O failed: -6 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 starting I/O failed: -6 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Write completed with error (sct=0, sc=8) 
00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 starting I/O failed: -6 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Write completed with error (sct=0, sc=8) 00:34:37.034 Write completed with error (sct=0, sc=8) 00:34:37.034 Write completed with error (sct=0, sc=8) 00:34:37.034 starting I/O failed: -6 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Write completed with error (sct=0, sc=8) 00:34:37.034 Write completed with error (sct=0, sc=8) 00:34:37.034 starting I/O failed: -6 00:34:37.034 Write completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Write completed with error (sct=0, sc=8) 00:34:37.034 Write completed with error (sct=0, sc=8) 00:34:37.034 starting I/O failed: -6 00:34:37.034 Write completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 starting I/O failed: -6 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 starting I/O failed: -6 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 [2024-12-09 05:29:30.897386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9d860 is same with the state(6) to be set 00:34:37.034 Write completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Write completed with error (sct=0, sc=8) 00:34:37.034 Write completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Write completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Write completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Write completed with error (sct=0, sc=8) 00:34:37.034 Write completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Write completed with error (sct=0, sc=8) 00:34:37.034 Write completed with error (sct=0, sc=8) 00:34:37.034 Write completed with error (sct=0, sc=8) 00:34:37.034 Write completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Write completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 
Write completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Write completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.034 Read completed with error (sct=0, sc=8) 00:34:37.964 [2024-12-09 05:29:31.862711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9e9b0 is same with the state(6) to be set 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Write completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Write completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Write completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Write completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Write completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Write completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 [2024-12-09 05:29:31.898718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9d680 is same with the state(6) to be set 00:34:37.964 Write completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Write completed with error (sct=0, sc=8) 00:34:37.964 Write completed with error (sct=0, sc=8) 00:34:37.964 Write completed with error (sct=0, sc=8) 00:34:37.964 Write completed with error (sct=0, sc=8) 00:34:37.964 Write completed with error (sct=0, sc=8) 00:34:37.964 Write completed with error (sct=0, sc=8) 00:34:37.964 Write completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Read completed with 
error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Write completed with error (sct=0, sc=8) 00:34:37.964 Write completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Write completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Write completed with error (sct=0, sc=8) 00:34:37.964 Write completed with error (sct=0, sc=8) 00:34:37.964 Write completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Write completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 Write completed with error (sct=0, sc=8) 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.964 [2024-12-09 05:29:31.899994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9d2c0 is same with the state(6) to be set 00:34:37.964 Read completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Write completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Write completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Write completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Write completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Write completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Write completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Write completed with error (sct=0, sc=8) 00:34:37.965 Write completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Write completed with error (sct=0, sc=8) 00:34:37.965 Write completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Write completed with error (sct=0, sc=8) 00:34:37.965 [2024-12-09 05:29:31.900482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fed9800d020 is same with the state(6) to be set 00:34:37.965 Write completed with error (sct=0, sc=8) 00:34:37.965 Write completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Write completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Write completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error 
(sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 Read completed with error (sct=0, sc=8) 00:34:37.965 [2024-12-09 05:29:31.900641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fed9800d7e0 is same with the state(6) to be set 00:34:37.965 Initializing NVMe Controllers 00:34:37.965 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:37.965 Controller IO queue size 128, less than required. 00:34:37.965 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:37.965 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:37.965 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:37.965 Initialization complete. Launching workers. 00:34:37.965 ======================================================== 00:34:37.965 Latency(us) 00:34:37.965 Device Information : IOPS MiB/s Average min max 00:34:37.965 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.69 0.08 969733.82 435.14 2005145.14 00:34:37.965 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 156.75 0.08 971534.34 752.59 2005982.05 00:34:37.965 ======================================================== 00:34:37.965 Total : 320.44 0.16 970614.57 435.14 2005982.05 00:34:37.965 00:34:37.965 [2024-12-09 05:29:31.901622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9e9b0 (9): Bad file descriptor 00:34:37.965 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.965 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:34:37.965 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 545624 00:34:37.965 05:29:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:34:37.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 545624 00:34:38.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (545624) - No such process 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 545624 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 545624 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:34:38.221 
05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 545624 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:38.221 [2024-12-09 05:29:32.424624] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=546152 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 546152 00:34:38.221 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:38.477 [2024-12-09 05:29:32.487191] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:34:38.734 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:38.734 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 546152 00:34:38.734 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:39.295 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:39.295 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 546152 00:34:39.295 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:39.858 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:39.858 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 546152 00:34:39.858 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:40.423 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:40.423 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 546152 00:34:40.423 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:40.986 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:40.986 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 546152 00:34:40.986 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:41.243 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:41.243 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 546152 00:34:41.243 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:41.500 Initializing NVMe Controllers 00:34:41.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:41.500 Controller IO queue size 128, less than required. 00:34:41.500 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:41.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:41.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:41.500 Initialization complete. Launching workers. 
00:34:41.500 ======================================================== 00:34:41.500 Latency(us) 00:34:41.500 Device Information : IOPS MiB/s Average min max 00:34:41.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004434.75 1000211.34 1014428.55 00:34:41.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004600.44 1000197.24 1012451.17 00:34:41.500 ======================================================== 00:34:41.500 Total : 256.00 0.12 1004517.60 1000197.24 1014428.55 00:34:41.500 00:34:41.757 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:41.757 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 546152 00:34:41.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (546152) - No such process 00:34:41.757 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 546152 00:34:41.757 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:34:41.757 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:34:41.757 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:41.757 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:34:41.757 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:41.757 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:34:41.757 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:41.757 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:41.757 rmmod nvme_tcp 00:34:42.015 rmmod nvme_fabrics 00:34:42.015 rmmod nvme_keyring 00:34:42.015 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:42.015 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:34:42.015 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:34:42.015 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 545598 ']' 00:34:42.015 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 545598 00:34:42.015 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 545598 ']' 00:34:42.015 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 545598 00:34:42.015 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:34:42.015 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:42.015 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 545598 00:34:42.015 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:42.015 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:34:42.015 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 545598' 00:34:42.015 killing process with pid 545598 00:34:42.015 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 545598 00:34:42.015 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 545598 00:34:42.276 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:42.276 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:42.276 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:42.276 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:34:42.276 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:34:42.276 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:34:42.276 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:42.276 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:42.276 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:42.276 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:42.276 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:42.276 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:44.185 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:44.185 00:34:44.185 real 0m12.671s 00:34:44.185 user 0m28.167s 00:34:44.185 sys 0m3.055s 00:34:44.185 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:44.185 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:44.185 ************************************ 00:34:44.185 END TEST nvmf_delete_subsystem 00:34:44.185 ************************************ 00:34:44.185 05:29:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:34:44.185 05:29:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:44.185 05:29:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:44.185 05:29:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:34:44.468 ************************************ 00:34:44.468 START TEST nvmf_host_management 00:34:44.468 ************************************ 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:34:44.468 * Looking for test storage... 
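Before the next test starts, nvmftestfini tears the environment back down; the sync/rmmod/iptables/namespace lines above are its trace. Condensed into plain commands (a sketch of what the helpers do, not the helpers themselves; $nvmfpid is the target pid recorded by nvmfappstart):

  sync
  modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring      # unload initiator-side modules
  kill "$nvmfpid"                                        # stop the nvmf_tgt app (killprocess in the harness)
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules the test added
  ip netns delete cvl_0_0_ns_spdk                        # roughly what remove_spdk_ns amounts to here
  ip -4 addr flush cvl_0_1                               # clear the initiator-side address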
00:34:44.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:44.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.468 --rc genhtml_branch_coverage=1 00:34:44.468 --rc genhtml_function_coverage=1 00:34:44.468 --rc genhtml_legend=1 00:34:44.468 --rc geninfo_all_blocks=1 00:34:44.468 --rc geninfo_unexecuted_blocks=1 00:34:44.468 00:34:44.468 ' 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:44.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.468 --rc genhtml_branch_coverage=1 00:34:44.468 --rc genhtml_function_coverage=1 00:34:44.468 --rc genhtml_legend=1 00:34:44.468 --rc geninfo_all_blocks=1 00:34:44.468 --rc geninfo_unexecuted_blocks=1 00:34:44.468 00:34:44.468 ' 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:44.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.468 --rc genhtml_branch_coverage=1 00:34:44.468 --rc genhtml_function_coverage=1 00:34:44.468 --rc genhtml_legend=1 00:34:44.468 --rc geninfo_all_blocks=1 00:34:44.468 --rc geninfo_unexecuted_blocks=1 00:34:44.468 00:34:44.468 ' 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:44.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.468 --rc genhtml_branch_coverage=1 00:34:44.468 --rc genhtml_function_coverage=1 00:34:44.468 --rc genhtml_legend=1 00:34:44.468 --rc geninfo_all_blocks=1 00:34:44.468 --rc geninfo_unexecuted_blocks=1 00:34:44.468 00:34:44.468 ' 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.468 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:34:44.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:34:44.469 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:47.001 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:47.001 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:34:47.001 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:47.001 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:47.001 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:47.001 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:47.001 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:47.001 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:34:47.001 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:47.001 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:34:47.001 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:34:47.001 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:34:47.001 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:47.002 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:47.002 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:47.002 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:47.002 05:29:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:47.002 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:47.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:47.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:34:47.002 00:34:47.002 --- 10.0.0.2 ping statistics --- 00:34:47.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:47.002 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:47.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:47.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:34:47.002 00:34:47.002 --- 10.0.0.1 ping statistics --- 00:34:47.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:47.002 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:47.002 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:47.003 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:34:47.003 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:34:47.003 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:34:47.003 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:47.003 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:47.003 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:47.003 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=548509 00:34:47.003 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 548509 00:34:47.003 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:34:47.003 05:29:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 548509 ']' 00:34:47.003 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:47.003 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:47.003 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:47.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:47.003 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:47.003 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:47.003 [2024-12-09 05:29:41.027807] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:34:47.003 [2024-12-09 05:29:41.027899] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:47.003 [2024-12-09 05:29:41.102731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:47.003 [2024-12-09 05:29:41.163877] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:47.003 [2024-12-09 05:29:41.163941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:47.003 [2024-12-09 05:29:41.163954] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:47.003 [2024-12-09 05:29:41.163964] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:47.003 [2024-12-09 05:29:41.163974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
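
The nvmf/common.sh block above builds a two-port loopback topology for NVMe/TCP: one of the two cvl_* ports (cvl_0_0) is moved into a private network namespace and becomes the target side, cvl_0_1 stays in the root namespace as the initiator, and both ends get addresses in 10.0.0.0/24 before the target app is launched inside the namespace. A minimal standalone sketch of the same steps, reusing the interface and namespace names from this log (adjust for other NICs):

ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF                                  # tagged so teardown can grep it out
ping -c 1 10.0.0.2                                                  # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> initiator

Because nvmf_tgt is started under ip netns exec cvl_0_0_ns_spdk (visible above), its TCP listener ends up on 10.0.0.2 while bdevperf later connects from the root namespace over cvl_0_1.
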
00:34:47.003 [2024-12-09 05:29:41.165565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:47.003 [2024-12-09 05:29:41.165613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:47.003 [2024-12-09 05:29:41.165697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:47.003 [2024-12-09 05:29:41.165693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:47.261 [2024-12-09 05:29:41.305943] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:47.261 Malloc0 00:34:47.261 [2024-12-09 05:29:41.382027] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=548572 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 548572 /var/tmp/bdevperf.sock 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 548572 ']' 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:47.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:47.261 { 00:34:47.261 "params": { 00:34:47.261 "name": "Nvme$subsystem", 00:34:47.261 "trtype": "$TEST_TRANSPORT", 00:34:47.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:47.261 "adrfam": "ipv4", 00:34:47.261 "trsvcid": "$NVMF_PORT", 00:34:47.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:47.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:47.261 "hdgst": ${hdgst:-false}, 00:34:47.261 "ddgst": ${ddgst:-false} 00:34:47.261 }, 00:34:47.261 "method": "bdev_nvme_attach_controller" 00:34:47.261 } 00:34:47.261 EOF 00:34:47.261 )") 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:34:47.261 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:47.261 "params": { 00:34:47.261 "name": "Nvme0", 00:34:47.261 "trtype": "tcp", 00:34:47.261 "traddr": "10.0.0.2", 00:34:47.261 "adrfam": "ipv4", 00:34:47.261 "trsvcid": "4420", 00:34:47.261 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:47.261 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:47.261 "hdgst": false, 00:34:47.261 "ddgst": false 00:34:47.261 }, 00:34:47.261 "method": "bdev_nvme_attach_controller" 00:34:47.261 }' 00:34:47.261 [2024-12-09 05:29:41.463805] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
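
The gen_nvmf_target_json helper traced here just prints a bdev_nvme_attach_controller entry and hands it to bdevperf on a file descriptor (--json /dev/fd/63), so no config file ever touches disk. The params block below is copied from the trace; the wrapper around it is the usual SPDK JSON-config shape that --json expects, written out by hand as a sketch rather than the helper's literal jq output, and the file path is only illustrative:

cat > /tmp/bdevperf-nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# run from an SPDK build tree; same queue depth, I/O size, workload and runtime as the test
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf-nvme0.json -q 64 -o 65536 -w verify -t 10
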
00:34:47.261 [2024-12-09 05:29:41.463900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid548572 ] 00:34:47.519 [2024-12-09 05:29:41.537670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:47.519 [2024-12-09 05:29:41.597569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.776 Running I/O for 10 seconds... 00:34:47.776 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:47.776 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:34:47.776 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:34:47.776 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.776 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:47.776 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.776 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:47.776 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:34:47.776 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:34:47.776 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:34:47.776 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:34:47.776 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:34:47.776 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:34:47.776 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:34:47.776 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:34:47.776 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:34:47.776 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.776 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:47.776 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.776 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:34:47.776 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:34:47.776 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:34:48.034 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:34:48.034 
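
The waitforio loop traced around this point is a bounded poll: for at most ten quarter-second iterations it reads the Nvme0n1 counters over bdevperf's RPC socket and succeeds once at least 100 reads have completed, which is what proves the attach is actually carrying I/O. Condensed into a standalone sketch, with scripts/rpc.py standing in for the test's rpc_cmd wrapper:

waitforio() {
    # usage: waitforio <rpc socket> <bdev name>; returns 0 once the bdev has served >= 100 reads
    local sock=$1 bdev=$2 i=10 reads
    while (( i != 0 )); do
        reads=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
                jq -r '.bdevs[0].num_read_ops')
        [ "$reads" -ge 100 ] && return 0
        sleep 0.25
        (( i-- ))
    done
    return 1
}
waitforio /var/tmp/bdevperf.sock Nvme0n1
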
05:29:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:34:48.034 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:34:48.034 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.034 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:34:48.034 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:48.034 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.034 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:34:48.034 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:34:48.034 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:34:48.034 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:34:48.034 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:34:48.034 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:34:48.034 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.034 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:48.034 [2024-12-09 05:29:42.213171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x707f10 is same with the state(6) to be set 00:34:48.034 [2024-12-09 05:29:42.213259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x707f10 is same with the state(6) to be set 00:34:48.034 [2024-12-09 05:29:42.213283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x707f10 is same with the state(6) to be set 00:34:48.034 [2024-12-09 05:29:42.213297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x707f10 is same with the state(6) to be set 00:34:48.034 [2024-12-09 05:29:42.213309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x707f10 is same with the state(6) to be set 00:34:48.034 [2024-12-09 05:29:42.213330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x707f10 is same with the state(6) to be set 00:34:48.034 [2024-12-09 05:29:42.213342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x707f10 is same with the state(6) to be set 00:34:48.034 [2024-12-09 05:29:42.213354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x707f10 is same with the state(6) to be set 00:34:48.034 [2024-12-09 05:29:42.213365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x707f10 is same with the state(6) to be set 00:34:48.034 [2024-12-09 05:29:42.213377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x707f10 is same with the state(6) to be set 00:34:48.034 [2024-12-09 05:29:42.213389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x707f10 is same with 
the state(6) to be set 00:34:48.034 [2024-12-09 05:29:42.213401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x707f10 is same with the state(6) to be set 00:34:48.034 [2024-12-09 05:29:42.213413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x707f10 is same with the state(6) to be set 00:34:48.034 [2024-12-09 05:29:42.213425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x707f10 is same with the state(6) to be set 00:34:48.034 [2024-12-09 05:29:42.213436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x707f10 is same with the state(6) to be set 00:34:48.034 [2024-12-09 05:29:42.213448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x707f10 is same with the state(6) to be set 00:34:48.034 [2024-12-09 05:29:42.213460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x707f10 is same with the state(6) to be set 00:34:48.034 [2024-12-09 05:29:42.213471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x707f10 is same with the state(6) to be set 00:34:48.035 [2024-12-09 05:29:42.213483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x707f10 is same with the state(6) to be set 00:34:48.035 [2024-12-09 05:29:42.213495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x707f10 is same with the state(6) to be set 00:34:48.035 [2024-12-09 05:29:42.213517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x707f10 is same with the state(6) to be set 00:34:48.035 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.035 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:34:48.035 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.035 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:48.035 [2024-12-09 05:29:42.221158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:48.035 [2024-12-09 05:29:42.221202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.221221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:48.035 [2024-12-09 05:29:42.221235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.221250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:48.035 [2024-12-09 05:29:42.221263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.221305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:48.035 [2024-12-09 05:29:42.221327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.221340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c7a50 is same with the state(6) to be set 00:34:48.035 [2024-12-09 05:29:42.221676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.221700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.221730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.221746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.221763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.221778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.221793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.221808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.221823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.221837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.221853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.221882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.221904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.221918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.221932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.221947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.221961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.221975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.221989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.222003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.222019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.222033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.222048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.222061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.222075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.222089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.222104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.222117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.222131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.222145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.222160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.222174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.222189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.222203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.222217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.222232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.222247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.222291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.222312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:48.035 [2024-12-09 05:29:42.222328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.222343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.222357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.222372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.222386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.222401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.222414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.222429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.222443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.222458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.222471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.222486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.222500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.222515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.222529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.222544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.222558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.222598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.222612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.222627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 
[2024-12-09 05:29:42.222640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.222655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.222668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.035 [2024-12-09 05:29:42.222685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.035 [2024-12-09 05:29:42.222700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.222714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.222728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.222742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.222755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.222770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.222783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.222798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.222812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.222826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.222840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.222855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.222868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.222882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.222896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.222910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 
05:29:42.222923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.222937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.222950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.222964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.222977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.222992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.223005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.223020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.223036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.223051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.223065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.223080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.223093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.223107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.223121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.223135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.223148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.223163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.223176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.223190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 
05:29:42.223204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.223218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.223231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.223245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.223282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.223301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.223322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.223337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.223352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.223367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.223381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.223396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.223410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.223428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.223443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.223459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.223472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.223487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.223501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.223516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 
05:29:42.223529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.223544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.223557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.223592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.223606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.223620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.223636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.223650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.036 [2024-12-09 05:29:42.223663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:48.036 [2024-12-09 05:29:42.224854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:48.036 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.036 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:34:48.036 task offset: 81920 on job bdev=Nvme0n1 fails 00:34:48.036 00:34:48.036 Latency(us) 00:34:48.036 [2024-12-09T04:29:42.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:48.036 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:48.036 Job: Nvme0n1 ended in about 0.41 seconds with error 00:34:48.036 Verification LBA range: start 0x0 length 0x400 00:34:48.036 Nvme0n1 : 0.41 1577.71 98.61 157.77 0.00 35828.27 2912.71 34564.17 00:34:48.036 [2024-12-09T04:29:42.261Z] =================================================================================================================== 00:34:48.036 [2024-12-09T04:29:42.261Z] Total : 1577.71 98.61 157.77 0.00 35828.27 2912.71 34564.17 00:34:48.036 [2024-12-09 05:29:42.226790] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:48.036 [2024-12-09 05:29:42.226818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c7a50 (9): Bad file descriptor 00:34:48.036 [2024-12-09 05:29:42.237531] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
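
To recap what the wall of "ABORTED - SQ DELETION" completions above demonstrates: while bdevperf is mid-run, the test removes the host NQN from the subsystem's allow list (host_management.sh line 84), the target tears down that host's queue pairs and aborts every outstanding write, and the host is then re-added so the initiator's automatic reset can reconnect ("Resetting controller successful"). Driven by hand against a live target, the same fault injection is roughly the following, assuming the target answers on the default /var/tmp/spdk.sock RPC socket:

scripts/rpc.py nvmf_subsystem_remove_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # in-flight I/O gets aborted
scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # let the host back in
sleep 1                                                    # give the host time to reset and reconnect
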
00:34:49.405 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 548572 00:34:49.405 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (548572) - No such process 00:34:49.405 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:34:49.405 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:34:49.405 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:34:49.405 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:34:49.405 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:34:49.405 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:34:49.405 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:49.405 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:49.405 { 00:34:49.405 "params": { 00:34:49.405 "name": "Nvme$subsystem", 00:34:49.405 "trtype": "$TEST_TRANSPORT", 00:34:49.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:49.405 "adrfam": "ipv4", 00:34:49.405 "trsvcid": "$NVMF_PORT", 00:34:49.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:49.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:49.405 "hdgst": ${hdgst:-false}, 00:34:49.405 "ddgst": ${ddgst:-false} 00:34:49.405 }, 00:34:49.405 "method": "bdev_nvme_attach_controller" 00:34:49.405 } 00:34:49.405 EOF 00:34:49.405 )") 00:34:49.405 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:34:49.405 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:34:49.405 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:34:49.405 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:49.405 "params": { 00:34:49.405 "name": "Nvme0", 00:34:49.405 "trtype": "tcp", 00:34:49.405 "traddr": "10.0.0.2", 00:34:49.405 "adrfam": "ipv4", 00:34:49.405 "trsvcid": "4420", 00:34:49.405 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:49.405 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:49.405 "hdgst": false, 00:34:49.405 "ddgst": false 00:34:49.405 }, 00:34:49.405 "method": "bdev_nvme_attach_controller" 00:34:49.405 }' 00:34:49.405 [2024-12-09 05:29:43.277673] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:34:49.405 [2024-12-09 05:29:43.277753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid548833 ] 00:34:49.405 [2024-12-09 05:29:43.345856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:49.405 [2024-12-09 05:29:43.406530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:49.662 Running I/O for 1 seconds... 
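
One line at the top of this block is easy to misread: kill -9 548572 reports "No such process" only because the first bdevperf already exited when its controller went away, and the script keeps going because the kill is guarded. Reduced to the underlying idiom, together with the stale CPU-lock cleanup that follows it in the trace:

perfpid=548572                                  # pid captured when the first bdevperf was launched
kill -9 "$perfpid" || true                      # tolerate the process having already died
rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 \
      /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004   # locks left behind by the killed app
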
00:34:50.591 1631.00 IOPS, 101.94 MiB/s 00:34:50.591 Latency(us) 00:34:50.591 [2024-12-09T04:29:44.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:50.591 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:50.591 Verification LBA range: start 0x0 length 0x400 00:34:50.591 Nvme0n1 : 1.03 1671.61 104.48 0.00 0.00 37667.76 5849.69 33399.09 00:34:50.591 [2024-12-09T04:29:44.816Z] =================================================================================================================== 00:34:50.591 [2024-12-09T04:29:44.816Z] Total : 1671.61 104.48 0.00 0.00 37667.76 5849.69 33399.09 00:34:50.847 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:34:50.847 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:34:50.847 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:34:50.847 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:34:50.847 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:34:50.847 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:50.847 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:34:50.847 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:50.847 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:34:50.847 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:50.847 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:50.847 rmmod nvme_tcp 00:34:50.847 rmmod nvme_fabrics 00:34:50.847 rmmod nvme_keyring 00:34:51.103 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:51.103 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:34:51.103 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:34:51.103 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 548509 ']' 00:34:51.103 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 548509 00:34:51.103 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 548509 ']' 00:34:51.103 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 548509 00:34:51.103 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:34:51.103 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:51.103 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 548509 00:34:51.103 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:51.103 05:29:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:51.103 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 548509' 00:34:51.103 killing process with pid 548509 00:34:51.103 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 548509 00:34:51.103 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 548509 00:34:51.360 [2024-12-09 05:29:45.379484] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:34:51.360 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:51.360 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:51.360 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:51.360 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:34:51.360 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:34:51.360 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:51.360 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:34:51.360 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:51.360 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:51.360 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:51.360 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:51.360 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.266 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:53.266 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:34:53.266 00:34:53.266 real 0m9.038s 00:34:53.266 user 0m20.056s 00:34:53.266 sys 0m2.845s 00:34:53.266 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:53.266 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:53.266 ************************************ 00:34:53.266 END TEST nvmf_host_management 00:34:53.266 ************************************ 00:34:53.266 05:29:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:34:53.266 05:29:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:53.266 05:29:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:53.266 05:29:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:34:53.525 ************************************ 00:34:53.525 START TEST nvmf_lvol 00:34:53.525 ************************************ 00:34:53.525 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:34:53.525 * Looking for test storage... 00:34:53.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:53.525 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:53.525 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:34:53.525 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:53.525 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:53.525 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:53.525 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:53.525 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:53.525 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:34:53.525 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:34:53.525 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:34:53.525 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:34:53.525 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:34:53.525 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:34:53.525 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:34:53.525 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:53.525 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:34:53.525 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:34:53.525 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:53.525 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:53.525 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:53.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.526 --rc genhtml_branch_coverage=1 00:34:53.526 --rc genhtml_function_coverage=1 00:34:53.526 --rc genhtml_legend=1 00:34:53.526 --rc geninfo_all_blocks=1 00:34:53.526 --rc geninfo_unexecuted_blocks=1 00:34:53.526 00:34:53.526 ' 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:53.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.526 --rc genhtml_branch_coverage=1 00:34:53.526 --rc genhtml_function_coverage=1 00:34:53.526 --rc genhtml_legend=1 00:34:53.526 --rc geninfo_all_blocks=1 00:34:53.526 --rc geninfo_unexecuted_blocks=1 00:34:53.526 00:34:53.526 ' 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:53.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.526 --rc genhtml_branch_coverage=1 00:34:53.526 --rc genhtml_function_coverage=1 00:34:53.526 --rc genhtml_legend=1 00:34:53.526 --rc geninfo_all_blocks=1 00:34:53.526 --rc geninfo_unexecuted_blocks=1 00:34:53.526 00:34:53.526 ' 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:53.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.526 --rc genhtml_branch_coverage=1 00:34:53.526 --rc genhtml_function_coverage=1 00:34:53.526 --rc genhtml_legend=1 00:34:53.526 --rc geninfo_all_blocks=1 00:34:53.526 --rc geninfo_unexecuted_blocks=1 00:34:53.526 00:34:53.526 ' 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
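The lcov gate traced just above (lt 1.15 2 via cmp_versions) is a plain component-wise version compare: split both version strings on '.', '-' and ':', compare the fields numerically, and fall back to the lcov 1.x --rc option spelling when the installed lcov is older than 2. A minimal sketch of that logic, reusing the helper names visible in the trace (lt, cmp_versions, decimal) but not the verbatim scripts/common.sh source:

# Sketch only; assumes bash and an lcov binary on PATH.
decimal() {                                   # numeric value of one version field
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] || d=0               # non-numeric fields compare as 0
    echo "$d"
}
cmp_versions() {                              # usage: cmp_versions VER1 '<' VER2
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v a b
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=$(decimal "${ver1[v]:-0}") b=$(decimal "${ver2[v]:-0}")
        (( a > b )) && { [[ $op == '>' ]]; return; }
        (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '<=' || $op == '>=' || $op == '==' ]]   # every field equal
}
lt() { cmp_versions "$1" '<' "$2"; }
# As in the trace: lcov 1.15 is older than 2, so the 1.x rc-option names are exported.
if lt "$(lcov --version | awk '{print $NF}')" 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi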
00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:53.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:34:53.526 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:56.064 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:56.064 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:34:56.064 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:56.064 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:56.065 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:56.065 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:56.065 05:29:49 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:56.065 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:56.065 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:56.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:56.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:34:56.065 00:34:56.065 --- 10.0.0.2 ping statistics --- 00:34:56.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:56.065 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:56.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:56.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:34:56.065 00:34:56.065 --- 10.0.0.1 ping statistics --- 00:34:56.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:56.065 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:56.065 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:56.066 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:56.066 05:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:34:56.066 05:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:56.066 05:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:56.066 05:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:56.066 05:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=551052 00:34:56.066 05:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:34:56.066 05:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 551052 00:34:56.066 05:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 551052 ']' 00:34:56.066 05:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:56.066 05:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:56.066 05:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:56.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:56.066 05:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:56.066 05:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:56.066 [2024-12-09 05:29:50.062063] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
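Condensed from the nvmf_tcp_init trace above: the rig moves one of the two e810 ports into a private namespace that will host the target, keeps the other in the root namespace for the initiator, assigns the 10.0.0.x pair, allows NVMe/TCP (port 4420) in through the host-side interface, and ping-checks both directions before starting nvmf_tgt inside the namespace. Interface and namespace names are the ones in the log; this condensed summary is an illustration, not part of the test output:

ip netns add cvl_0_0_ns_spdk                          # namespace that will run nvmf_tgt
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # first port becomes the target NIC
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                    # target IP reachable from the root ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # initiator IP reachable from the target ns
# The target is then launched in that namespace, as in the log:
#   ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7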
00:34:56.066 [2024-12-09 05:29:50.062157] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:56.066 [2024-12-09 05:29:50.137253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:56.066 [2024-12-09 05:29:50.192311] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:56.066 [2024-12-09 05:29:50.192384] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:56.066 [2024-12-09 05:29:50.192407] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:56.066 [2024-12-09 05:29:50.192418] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:56.066 [2024-12-09 05:29:50.192427] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:56.066 [2024-12-09 05:29:50.193897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.066 [2024-12-09 05:29:50.194009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:56.066 [2024-12-09 05:29:50.194022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:56.323 05:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:56.323 05:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:34:56.323 05:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:56.323 05:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:56.323 05:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:56.323 05:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:56.323 05:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:56.581 [2024-12-09 05:29:50.608931] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:56.581 05:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:56.838 05:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:34:56.838 05:29:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:57.096 05:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:34:57.096 05:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:34:57.353 05:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:34:57.610 05:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=7fb8b2fa-bdf6-4d3a-a5d7-7e875a0f3713 00:34:57.610 05:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7fb8b2fa-bdf6-4d3a-a5d7-7e875a0f3713 lvol 20 00:34:57.889 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a0b9121a-6323-4310-8ade-054349e19375 00:34:57.889 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:58.146 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a0b9121a-6323-4310-8ade-054349e19375 00:34:58.403 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:58.660 [2024-12-09 05:29:52.835292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:58.660 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:58.916 05:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=551482 00:34:58.916 05:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:34:58.916 05:29:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:35:00.286 05:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a0b9121a-6323-4310-8ade-054349e19375 MY_SNAPSHOT 00:35:00.286 05:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d0057363-1ea9-41d7-9fb8-e7d32003d4ab 00:35:00.286 05:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a0b9121a-6323-4310-8ade-054349e19375 30 00:35:00.850 05:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d0057363-1ea9-41d7-9fb8-e7d32003d4ab MY_CLONE 00:35:01.106 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d8f2631b-374f-4623-ae99-4f36835ce1b7 00:35:01.106 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d8f2631b-374f-4623-ae99-4f36835ce1b7 00:35:01.668 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 551482 00:35:09.769 Initializing NVMe Controllers 00:35:09.769 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:35:09.769 Controller IO queue size 128, less than required. 00:35:09.769 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:35:09.769 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:35:09.769 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:35:09.769 Initialization complete. Launching workers. 00:35:09.769 ======================================================== 00:35:09.769 Latency(us) 00:35:09.769 Device Information : IOPS MiB/s Average min max 00:35:09.769 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9956.80 38.89 12865.38 1583.80 75176.95 00:35:09.769 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10322.50 40.32 12399.91 2206.61 64459.90 00:35:09.769 ======================================================== 00:35:09.769 Total : 20279.30 79.22 12628.45 1583.80 75176.95 00:35:09.769 00:35:09.769 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:09.769 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a0b9121a-6323-4310-8ade-054349e19375 00:35:10.026 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7fb8b2fa-bdf6-4d3a-a5d7-7e875a0f3713 00:35:10.283 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:35:10.283 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:35:10.283 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:35:10.283 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:10.283 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:35:10.283 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:10.283 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:35:10.283 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:10.283 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:10.283 rmmod nvme_tcp 00:35:10.283 rmmod nvme_fabrics 00:35:10.283 rmmod nvme_keyring 00:35:10.283 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:10.283 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:35:10.283 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:35:10.283 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 551052 ']' 00:35:10.283 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 551052 00:35:10.283 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 551052 ']' 00:35:10.283 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 551052 00:35:10.283 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:35:10.283 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:10.283 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 551052 00:35:10.283 05:30:04 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:10.283 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:10.283 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 551052' 00:35:10.283 killing process with pid 551052 00:35:10.283 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 551052 00:35:10.283 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 551052 00:35:10.849 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:10.849 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:10.849 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:10.849 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:35:10.849 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:35:10.849 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:10.849 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:35:10.849 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:10.849 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:10.849 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:10.849 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:10.849 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:12.762 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:12.762 00:35:12.762 real 0m19.354s 00:35:12.762 user 1m5.321s 00:35:12.762 sys 0m5.759s 00:35:12.762 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:12.762 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:12.762 ************************************ 00:35:12.762 END TEST nvmf_lvol 00:35:12.762 ************************************ 00:35:12.762 05:30:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:35:12.762 05:30:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:12.762 05:30:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:12.762 05:30:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:35:12.762 ************************************ 00:35:12.762 START TEST nvmf_lvs_grow 00:35:12.762 ************************************ 00:35:12.762 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:35:12.762 * Looking for test storage... 
00:35:12.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:12.762 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:12.762 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:35:12.762 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:13.019 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:13.019 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:13.019 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:13.019 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:13.019 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:35:13.019 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:35:13.019 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:35:13.019 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:35:13.019 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:35:13.019 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:35:13.019 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:35:13.019 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:13.019 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:35:13.019 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:35:13.019 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:13.019 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:13.019 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:35:13.019 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:13.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.020 --rc genhtml_branch_coverage=1 00:35:13.020 --rc genhtml_function_coverage=1 00:35:13.020 --rc genhtml_legend=1 00:35:13.020 --rc geninfo_all_blocks=1 00:35:13.020 --rc geninfo_unexecuted_blocks=1 00:35:13.020 00:35:13.020 ' 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:13.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.020 --rc genhtml_branch_coverage=1 00:35:13.020 --rc genhtml_function_coverage=1 00:35:13.020 --rc genhtml_legend=1 00:35:13.020 --rc geninfo_all_blocks=1 00:35:13.020 --rc geninfo_unexecuted_blocks=1 00:35:13.020 00:35:13.020 ' 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:13.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.020 --rc genhtml_branch_coverage=1 00:35:13.020 --rc genhtml_function_coverage=1 00:35:13.020 --rc genhtml_legend=1 00:35:13.020 --rc geninfo_all_blocks=1 00:35:13.020 --rc geninfo_unexecuted_blocks=1 00:35:13.020 00:35:13.020 ' 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:13.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.020 --rc genhtml_branch_coverage=1 00:35:13.020 --rc genhtml_function_coverage=1 00:35:13.020 --rc genhtml_legend=1 00:35:13.020 --rc geninfo_all_blocks=1 00:35:13.020 --rc geninfo_unexecuted_blocks=1 00:35:13.020 00:35:13.020 ' 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:35:13.020 05:30:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:13.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:35:13.020 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:15.552 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:15.552 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:15.552 05:30:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:15.552 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:15.553 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:15.553 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:15.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:15.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:35:15.553 00:35:15.553 --- 10.0.0.2 ping statistics --- 00:35:15.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:15.553 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:15.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:15.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:35:15.553 00:35:15.553 --- 10.0.0.1 ping statistics --- 00:35:15.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:15.553 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=555383 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 555383 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 555383 ']' 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:15.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:15.553 [2024-12-09 05:30:09.382777] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
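The nvmf_tcp_init sequence above wires the two ports into a back-to-back test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (the target side), cvl_0_1 stays in the default namespace as 10.0.0.1 (the initiator side), an iptables rule accepts TCP port 4420 on the initiator-side interface, and one ping in each direction proves reachability before the target is started. Condensed to the underlying commands (interface and namespace names are specific to this host):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP (port 4420)
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator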
00:35:15.553 [2024-12-09 05:30:09.382859] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:15.553 [2024-12-09 05:30:09.454449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.553 [2024-12-09 05:30:09.510499] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:15.553 [2024-12-09 05:30:09.510570] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:15.553 [2024-12-09 05:30:09.510584] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:15.553 [2024-12-09 05:30:09.510595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:15.553 [2024-12-09 05:30:09.510604] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:15.553 [2024-12-09 05:30:09.511159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:15.553 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:15.812 [2024-12-09 05:30:09.900633] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:15.812 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:35:15.812 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:15.812 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:15.812 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:15.812 ************************************ 00:35:15.812 START TEST lvs_grow_clean 00:35:15.812 ************************************ 00:35:15.812 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:35:15.812 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:35:15.812 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:35:15.812 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:35:15.812 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:35:15.812 05:30:09 
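With the namespaces in place, nvmfappstart launches the target application inside the target namespace and then registers the TCP transport over the RPC socket before any subsystems exist. Stripped of the harness wrappers it amounts to the two commands below; SPDK stands for the checkout directory (the CI workspace path in the log is abbreviated here), and the transport call is only issued once the app is answering on /var/tmp/spdk.sock.

SPDK=/path/to/spdk     # illustrative; the job uses /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
"$SPDK"/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192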
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:35:15.812 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:35:15.812 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:15.812 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:15.812 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:16.070 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:35:16.070 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:35:16.328 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f73a3d6b-2ba1-43af-b12b-27ccdb42272a 00:35:16.328 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f73a3d6b-2ba1-43af-b12b-27ccdb42272a 00:35:16.328 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:35:16.586 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:35:16.586 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:35:16.586 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f73a3d6b-2ba1-43af-b12b-27ccdb42272a lvol 150 00:35:17.153 05:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1cdea427-ce5d-4374-a50d-d7fbfbcda6d2 00:35:17.153 05:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:17.153 05:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:35:17.153 [2024-12-09 05:30:11.331750] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:35:17.153 [2024-12-09 05:30:11.331833] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:35:17.153 true 00:35:17.153 05:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
f73a3d6b-2ba1-43af-b12b-27ccdb42272a 00:35:17.153 05:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:35:17.412 05:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:35:17.412 05:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:17.977 05:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1cdea427-ce5d-4374-a50d-d7fbfbcda6d2 00:35:17.978 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:18.235 [2024-12-09 05:30:12.423087] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:18.235 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:18.493 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=555826 00:35:18.493 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:35:18.493 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:18.493 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 555826 /var/tmp/bdevperf.sock 00:35:18.493 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 555826 ']' 00:35:18.493 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:18.494 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:18.752 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:18.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:18.752 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:18.752 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:35:18.752 [2024-12-09 05:30:12.761516] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
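The lvs_grow body above builds its storage stack on a plain file: a 200M file is registered as an AIO bdev with a 4096-byte block size, an lvstore with 4 MiB clusters is created on it, and a 150 MiB lvol is carved out of that; the file is then truncated to 400M and the AIO bdev rescanned, which grows the base bdev from 51200 to 102400 blocks but, as the second check confirms, leaves the lvstore at 49 data clusters until bdev_lvol_grow_lvstore is called later during the I/O run. The numbers follow from the sizes: 200 MiB / 4 MiB = 50 clusters, one of which is evidently taken by lvstore metadata, hence 49; after growing, 400 MiB / 4 MiB = 100 gives 99; and the 150 MiB lvol rounds up to 38 whole clusters, which is where the later free_clusters value of 61 (99 - 38) comes from. A condensed sketch of the same provisioning sequence, with paths shortened and the UUIDs captured the same way the script captures them:

SPDK=/path/to/spdk                            # illustrative checkout path
RPC="$SPDK"/scripts/rpc.py
AIO="$SPDK"/test/nvmf/target/aio_bdev         # plain file used as backing storage
truncate -s 200M "$AIO"
$RPC bdev_aio_create "$AIO" aio_bdev 4096
lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'    # 49
lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)
truncate -s 400M "$AIO"                       # grow the backing file ...
$RPC bdev_aio_rescan aio_bdev                 # ... and the AIO bdev; the lvstore still reports 49
# later, while bdevperf is writing to the exported lvol:
$RPC bdev_lvol_grow_lvstore -u "$lvs"
$RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'    # now 99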
00:35:18.752 [2024-12-09 05:30:12.761615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid555826 ] 00:35:18.752 [2024-12-09 05:30:12.828150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.752 [2024-12-09 05:30:12.887106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:19.009 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:19.009 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:35:19.009 05:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:35:19.267 Nvme0n1 00:35:19.267 05:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:35:19.524 [ 00:35:19.524 { 00:35:19.524 "name": "Nvme0n1", 00:35:19.524 "aliases": [ 00:35:19.524 "1cdea427-ce5d-4374-a50d-d7fbfbcda6d2" 00:35:19.524 ], 00:35:19.524 "product_name": "NVMe disk", 00:35:19.524 "block_size": 4096, 00:35:19.524 "num_blocks": 38912, 00:35:19.524 "uuid": "1cdea427-ce5d-4374-a50d-d7fbfbcda6d2", 00:35:19.524 "numa_id": 0, 00:35:19.524 "assigned_rate_limits": { 00:35:19.524 "rw_ios_per_sec": 0, 00:35:19.524 "rw_mbytes_per_sec": 0, 00:35:19.524 "r_mbytes_per_sec": 0, 00:35:19.524 "w_mbytes_per_sec": 0 00:35:19.524 }, 00:35:19.524 "claimed": false, 00:35:19.524 "zoned": false, 00:35:19.524 "supported_io_types": { 00:35:19.524 "read": true, 00:35:19.524 "write": true, 00:35:19.524 "unmap": true, 00:35:19.524 "flush": true, 00:35:19.524 "reset": true, 00:35:19.524 "nvme_admin": true, 00:35:19.524 "nvme_io": true, 00:35:19.524 "nvme_io_md": false, 00:35:19.524 "write_zeroes": true, 00:35:19.524 "zcopy": false, 00:35:19.524 "get_zone_info": false, 00:35:19.524 "zone_management": false, 00:35:19.524 "zone_append": false, 00:35:19.524 "compare": true, 00:35:19.524 "compare_and_write": true, 00:35:19.524 "abort": true, 00:35:19.524 "seek_hole": false, 00:35:19.524 "seek_data": false, 00:35:19.524 "copy": true, 00:35:19.524 "nvme_iov_md": false 00:35:19.524 }, 00:35:19.524 "memory_domains": [ 00:35:19.524 { 00:35:19.524 "dma_device_id": "system", 00:35:19.524 "dma_device_type": 1 00:35:19.524 } 00:35:19.524 ], 00:35:19.524 "driver_specific": { 00:35:19.524 "nvme": [ 00:35:19.524 { 00:35:19.524 "trid": { 00:35:19.524 "trtype": "TCP", 00:35:19.524 "adrfam": "IPv4", 00:35:19.524 "traddr": "10.0.0.2", 00:35:19.524 "trsvcid": "4420", 00:35:19.524 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:35:19.524 }, 00:35:19.524 "ctrlr_data": { 00:35:19.524 "cntlid": 1, 00:35:19.524 "vendor_id": "0x8086", 00:35:19.524 "model_number": "SPDK bdev Controller", 00:35:19.524 "serial_number": "SPDK0", 00:35:19.524 "firmware_revision": "25.01", 00:35:19.524 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:19.524 "oacs": { 00:35:19.524 "security": 0, 00:35:19.524 "format": 0, 00:35:19.524 "firmware": 0, 00:35:19.524 "ns_manage": 0 00:35:19.524 }, 00:35:19.524 "multi_ctrlr": true, 00:35:19.524 
"ana_reporting": false 00:35:19.524 }, 00:35:19.524 "vs": { 00:35:19.524 "nvme_version": "1.3" 00:35:19.524 }, 00:35:19.524 "ns_data": { 00:35:19.524 "id": 1, 00:35:19.524 "can_share": true 00:35:19.524 } 00:35:19.524 } 00:35:19.524 ], 00:35:19.524 "mp_policy": "active_passive" 00:35:19.524 } 00:35:19.524 } 00:35:19.524 ] 00:35:19.524 05:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=555955 00:35:19.524 05:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:35:19.524 05:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:19.781 Running I/O for 10 seconds... 00:35:20.715 Latency(us) 00:35:20.715 [2024-12-09T04:30:14.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:20.715 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:20.715 Nvme0n1 : 1.00 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:35:20.715 [2024-12-09T04:30:14.940Z] =================================================================================================================== 00:35:20.715 [2024-12-09T04:30:14.940Z] Total : 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:35:20.715 00:35:21.648 05:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f73a3d6b-2ba1-43af-b12b-27ccdb42272a 00:35:21.648 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:21.648 Nvme0n1 : 2.00 15050.00 58.79 0.00 0.00 0.00 0.00 0.00 00:35:21.648 [2024-12-09T04:30:15.873Z] =================================================================================================================== 00:35:21.648 [2024-12-09T04:30:15.873Z] Total : 15050.00 58.79 0.00 0.00 0.00 0.00 0.00 00:35:21.648 00:35:21.906 true 00:35:21.906 05:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f73a3d6b-2ba1-43af-b12b-27ccdb42272a 00:35:21.906 05:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:35:22.163 05:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:35:22.163 05:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:35:22.163 05:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 555955 00:35:22.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:22.729 Nvme0n1 : 3.00 15198.00 59.37 0.00 0.00 0.00 0.00 0.00 00:35:22.729 [2024-12-09T04:30:16.954Z] =================================================================================================================== 00:35:22.729 [2024-12-09T04:30:16.954Z] Total : 15198.00 59.37 0.00 0.00 0.00 0.00 0.00 00:35:22.729 00:35:23.661 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:23.661 Nvme0n1 : 4.00 15303.75 59.78 0.00 0.00 0.00 0.00 0.00 00:35:23.661 [2024-12-09T04:30:17.886Z] 
=================================================================================================================== 00:35:23.661 [2024-12-09T04:30:17.886Z] Total : 15303.75 59.78 0.00 0.00 0.00 0.00 0.00 00:35:23.661 00:35:25.032 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:25.032 Nvme0n1 : 5.00 15367.20 60.03 0.00 0.00 0.00 0.00 0.00 00:35:25.032 [2024-12-09T04:30:19.257Z] =================================================================================================================== 00:35:25.032 [2024-12-09T04:30:19.257Z] Total : 15367.20 60.03 0.00 0.00 0.00 0.00 0.00 00:35:25.032 00:35:25.965 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:25.965 Nvme0n1 : 6.00 15436.33 60.30 0.00 0.00 0.00 0.00 0.00 00:35:25.965 [2024-12-09T04:30:20.190Z] =================================================================================================================== 00:35:25.965 [2024-12-09T04:30:20.191Z] Total : 15436.33 60.30 0.00 0.00 0.00 0.00 0.00 00:35:25.966 00:35:26.899 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:26.899 Nvme0n1 : 7.00 15480.86 60.47 0.00 0.00 0.00 0.00 0.00 00:35:26.899 [2024-12-09T04:30:21.124Z] =================================================================================================================== 00:35:26.899 [2024-12-09T04:30:21.124Z] Total : 15480.86 60.47 0.00 0.00 0.00 0.00 0.00 00:35:26.899 00:35:27.850 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:27.850 Nvme0n1 : 8.00 15530.12 60.66 0.00 0.00 0.00 0.00 0.00 00:35:27.850 [2024-12-09T04:30:22.075Z] =================================================================================================================== 00:35:27.850 [2024-12-09T04:30:22.075Z] Total : 15530.12 60.66 0.00 0.00 0.00 0.00 0.00 00:35:27.850 00:35:28.783 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:28.783 Nvme0n1 : 9.00 15568.44 60.81 0.00 0.00 0.00 0.00 0.00 00:35:28.783 [2024-12-09T04:30:23.008Z] =================================================================================================================== 00:35:28.783 [2024-12-09T04:30:23.008Z] Total : 15568.44 60.81 0.00 0.00 0.00 0.00 0.00 00:35:28.783 00:35:29.716 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:29.716 Nvme0n1 : 10.00 15596.20 60.92 0.00 0.00 0.00 0.00 0.00 00:35:29.716 [2024-12-09T04:30:23.941Z] =================================================================================================================== 00:35:29.716 [2024-12-09T04:30:23.941Z] Total : 15596.20 60.92 0.00 0.00 0.00 0.00 0.00 00:35:29.716 00:35:29.716 00:35:29.716 Latency(us) 00:35:29.716 [2024-12-09T04:30:23.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:29.716 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:29.716 Nvme0n1 : 10.01 15590.00 60.90 0.00 0.00 8204.78 4441.88 17087.91 00:35:29.716 [2024-12-09T04:30:23.941Z] =================================================================================================================== 00:35:29.716 [2024-12-09T04:30:23.941Z] Total : 15590.00 60.90 0.00 0.00 8204.78 4441.88 17087.91 00:35:29.716 { 00:35:29.716 "results": [ 00:35:29.716 { 00:35:29.716 "job": "Nvme0n1", 00:35:29.716 "core_mask": "0x2", 00:35:29.716 "workload": "randwrite", 00:35:29.716 "status": "finished", 00:35:29.716 "queue_depth": 128, 00:35:29.716 "io_size": 4096, 00:35:29.716 
"runtime": 10.005904, 00:35:29.716 "iops": 15589.995666558463, 00:35:29.716 "mibps": 60.898420572494, 00:35:29.716 "io_failed": 0, 00:35:29.716 "io_timeout": 0, 00:35:29.716 "avg_latency_us": 8204.783054021764, 00:35:29.716 "min_latency_us": 4441.884444444445, 00:35:29.716 "max_latency_us": 17087.905185185184 00:35:29.716 } 00:35:29.716 ], 00:35:29.716 "core_count": 1 00:35:29.716 } 00:35:29.716 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 555826 00:35:29.716 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 555826 ']' 00:35:29.716 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 555826 00:35:29.716 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:35:29.716 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:29.716 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 555826 00:35:29.716 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:29.716 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:29.717 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 555826' 00:35:29.717 killing process with pid 555826 00:35:29.717 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 555826 00:35:29.717 Received shutdown signal, test time was about 10.000000 seconds 00:35:29.717 00:35:29.717 Latency(us) 00:35:29.717 [2024-12-09T04:30:23.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:29.717 [2024-12-09T04:30:23.942Z] =================================================================================================================== 00:35:29.717 [2024-12-09T04:30:23.942Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:29.717 05:30:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 555826 00:35:29.975 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:30.540 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:30.540 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f73a3d6b-2ba1-43af-b12b-27ccdb42272a 00:35:30.540 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:35:30.798 05:30:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:35:30.798 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:35:30.798 05:30:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:31.056 [2024-12-09 05:30:25.254149] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:35:31.313 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f73a3d6b-2ba1-43af-b12b-27ccdb42272a 00:35:31.313 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:35:31.313 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f73a3d6b-2ba1-43af-b12b-27ccdb42272a 00:35:31.313 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:31.313 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:31.313 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:31.313 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:31.313 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:31.313 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:31.313 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:31.313 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:35:31.313 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f73a3d6b-2ba1-43af-b12b-27ccdb42272a 00:35:31.571 request: 00:35:31.571 { 00:35:31.571 "uuid": "f73a3d6b-2ba1-43af-b12b-27ccdb42272a", 00:35:31.571 "method": "bdev_lvol_get_lvstores", 00:35:31.571 "req_id": 1 00:35:31.571 } 00:35:31.571 Got JSON-RPC error response 00:35:31.571 response: 00:35:31.571 { 00:35:31.571 "code": -19, 00:35:31.571 "message": "No such device" 00:35:31.571 } 00:35:31.571 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:35:31.571 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:31.571 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:31.571 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:31.571 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:31.829 aio_bdev 00:35:31.829 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1cdea427-ce5d-4374-a50d-d7fbfbcda6d2 00:35:31.829 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=1cdea427-ce5d-4374-a50d-d7fbfbcda6d2 00:35:31.829 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:31.829 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:35:31.829 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:31.829 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:31.829 05:30:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:32.086 05:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1cdea427-ce5d-4374-a50d-d7fbfbcda6d2 -t 2000 00:35:32.358 [ 00:35:32.358 { 00:35:32.358 "name": "1cdea427-ce5d-4374-a50d-d7fbfbcda6d2", 00:35:32.358 "aliases": [ 00:35:32.358 "lvs/lvol" 00:35:32.358 ], 00:35:32.358 "product_name": "Logical Volume", 00:35:32.358 "block_size": 4096, 00:35:32.358 "num_blocks": 38912, 00:35:32.358 "uuid": "1cdea427-ce5d-4374-a50d-d7fbfbcda6d2", 00:35:32.358 "assigned_rate_limits": { 00:35:32.358 "rw_ios_per_sec": 0, 00:35:32.358 "rw_mbytes_per_sec": 0, 00:35:32.358 "r_mbytes_per_sec": 0, 00:35:32.358 "w_mbytes_per_sec": 0 00:35:32.358 }, 00:35:32.358 "claimed": false, 00:35:32.358 "zoned": false, 00:35:32.358 "supported_io_types": { 00:35:32.358 "read": true, 00:35:32.358 "write": true, 00:35:32.358 "unmap": true, 00:35:32.358 "flush": false, 00:35:32.358 "reset": true, 00:35:32.358 "nvme_admin": false, 00:35:32.358 "nvme_io": false, 00:35:32.358 "nvme_io_md": false, 00:35:32.358 "write_zeroes": true, 00:35:32.358 "zcopy": false, 00:35:32.358 "get_zone_info": false, 00:35:32.358 "zone_management": false, 00:35:32.358 "zone_append": false, 00:35:32.358 "compare": false, 00:35:32.358 "compare_and_write": false, 00:35:32.358 "abort": false, 00:35:32.358 "seek_hole": true, 00:35:32.358 "seek_data": true, 00:35:32.358 "copy": false, 00:35:32.358 "nvme_iov_md": false 00:35:32.358 }, 00:35:32.359 "driver_specific": { 00:35:32.359 "lvol": { 00:35:32.359 "lvol_store_uuid": "f73a3d6b-2ba1-43af-b12b-27ccdb42272a", 00:35:32.359 "base_bdev": "aio_bdev", 00:35:32.359 "thin_provision": false, 00:35:32.359 "num_allocated_clusters": 38, 00:35:32.359 "snapshot": false, 00:35:32.359 "clone": false, 00:35:32.359 "esnap_clone": false 00:35:32.359 } 00:35:32.359 } 00:35:32.359 } 00:35:32.359 ] 00:35:32.359 05:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:35:32.359 05:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f73a3d6b-2ba1-43af-b12b-27ccdb42272a 00:35:32.359 
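The teardown above doubles as a persistence check: once bdevperf and the NVMe-oF subsystem are gone, the lvstore reports 61 free clusters, the AIO bdev is deleted out from under it (which hot-removes the lvstore, hence the expected "No such device" error from the negated bdev_lvol_get_lvstores call), and the same file is then re-registered so that examine can rediscover the lvstore and bring the lvol back with its 38 allocated clusters intact. In outline, with the same shortened variables as above:

SPDK=/path/to/spdk; RPC="$SPDK"/scripts/rpc.py; AIO="$SPDK"/test/nvmf/target/aio_bdev
$RPC nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'    # 61 (99 total - 38 allocated)
$RPC bdev_aio_delete aio_bdev            # the lvstore disappears with its base bdev
$RPC bdev_lvol_get_lvstores -u "$lvs"    # now fails with -19 "No such device", as the test expects
$RPC bdev_aio_create "$AIO" aio_bdev 4096
$RPC bdev_wait_for_examine               # lvol examine re-opens the store from its on-disk metadata
$RPC bdev_get_bdevs -b "$lvol" -t 2000   # the logical volume is visible again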
05:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:35:32.699 05:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:35:32.699 05:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f73a3d6b-2ba1-43af-b12b-27ccdb42272a 00:35:32.699 05:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:35:33.033 05:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:35:33.033 05:30:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1cdea427-ce5d-4374-a50d-d7fbfbcda6d2 00:35:33.291 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f73a3d6b-2ba1-43af-b12b-27ccdb42272a 00:35:33.548 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:33.806 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:33.806 00:35:33.806 real 0m17.868s 00:35:33.806 user 0m17.423s 00:35:33.806 sys 0m1.837s 00:35:33.806 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:33.806 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:35:33.806 ************************************ 00:35:33.806 END TEST lvs_grow_clean 00:35:33.806 ************************************ 00:35:33.806 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:35:33.806 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:33.806 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:33.806 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:33.806 ************************************ 00:35:33.806 START TEST lvs_grow_dirty 00:35:33.806 ************************************ 00:35:33.806 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:35:33.806 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:35:33.806 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:35:33.806 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:35:33.806 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:35:33.806 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:35:33.806 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:35:33.806 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:33.806 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:33.806 05:30:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:34.064 05:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:35:34.064 05:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:35:34.321 05:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2c2ef427-8273-49dd-9824-29f0f9a90bfc 00:35:34.322 05:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c2ef427-8273-49dd-9824-29f0f9a90bfc 00:35:34.322 05:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:35:34.579 05:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:35:34.579 05:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:35:34.579 05:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2c2ef427-8273-49dd-9824-29f0f9a90bfc lvol 150 00:35:34.836 05:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e8266918-6ee8-45c1-896a-17ee3b34e9ba 00:35:34.836 05:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:34.836 05:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:35:35.093 [2024-12-09 05:30:29.232671] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:35:35.093 [2024-12-09 05:30:29.232761] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:35:35.093 true 00:35:35.093 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c2ef427-8273-49dd-9824-29f0f9a90bfc 00:35:35.093 05:30:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:35:35.351 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:35:35.351 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:35.608 05:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e8266918-6ee8-45c1-896a-17ee3b34e9ba 00:35:35.866 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:36.123 [2024-12-09 05:30:30.328034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:36.123 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:36.688 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=558021 00:35:36.688 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:35:36.688 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:36.688 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 558021 /var/tmp/bdevperf.sock 00:35:36.688 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 558021 ']' 00:35:36.688 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:36.688 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:36.688 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:36.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:36.688 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:36.688 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:36.688 [2024-12-09 05:30:30.654840] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:35:36.688 [2024-12-09 05:30:30.654908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid558021 ] 00:35:36.688 [2024-12-09 05:30:30.719779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:36.688 [2024-12-09 05:30:30.775911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:36.688 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:36.688 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:35:36.688 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:35:37.254 Nvme0n1 00:35:37.254 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:35:37.254 [ 00:35:37.254 { 00:35:37.254 "name": "Nvme0n1", 00:35:37.254 "aliases": [ 00:35:37.254 "e8266918-6ee8-45c1-896a-17ee3b34e9ba" 00:35:37.254 ], 00:35:37.254 "product_name": "NVMe disk", 00:35:37.254 "block_size": 4096, 00:35:37.254 "num_blocks": 38912, 00:35:37.254 "uuid": "e8266918-6ee8-45c1-896a-17ee3b34e9ba", 00:35:37.254 "numa_id": 0, 00:35:37.254 "assigned_rate_limits": { 00:35:37.254 "rw_ios_per_sec": 0, 00:35:37.254 "rw_mbytes_per_sec": 0, 00:35:37.254 "r_mbytes_per_sec": 0, 00:35:37.254 "w_mbytes_per_sec": 0 00:35:37.254 }, 00:35:37.254 "claimed": false, 00:35:37.254 "zoned": false, 00:35:37.254 "supported_io_types": { 00:35:37.254 "read": true, 00:35:37.254 "write": true, 00:35:37.254 "unmap": true, 00:35:37.254 "flush": true, 00:35:37.254 "reset": true, 00:35:37.254 "nvme_admin": true, 00:35:37.254 "nvme_io": true, 00:35:37.254 "nvme_io_md": false, 00:35:37.254 "write_zeroes": true, 00:35:37.254 "zcopy": false, 00:35:37.254 "get_zone_info": false, 00:35:37.254 "zone_management": false, 00:35:37.254 "zone_append": false, 00:35:37.254 "compare": true, 00:35:37.254 "compare_and_write": true, 00:35:37.254 "abort": true, 00:35:37.254 "seek_hole": false, 00:35:37.254 "seek_data": false, 00:35:37.254 "copy": true, 00:35:37.254 "nvme_iov_md": false 00:35:37.254 }, 00:35:37.254 "memory_domains": [ 00:35:37.254 { 00:35:37.254 "dma_device_id": "system", 00:35:37.254 "dma_device_type": 1 00:35:37.254 } 00:35:37.254 ], 00:35:37.254 "driver_specific": { 00:35:37.254 "nvme": [ 00:35:37.254 { 00:35:37.254 "trid": { 00:35:37.254 "trtype": "TCP", 00:35:37.254 "adrfam": "IPv4", 00:35:37.254 "traddr": "10.0.0.2", 00:35:37.254 "trsvcid": "4420", 00:35:37.254 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:35:37.254 }, 00:35:37.254 "ctrlr_data": { 00:35:37.254 "cntlid": 1, 00:35:37.254 "vendor_id": "0x8086", 00:35:37.254 "model_number": "SPDK bdev Controller", 00:35:37.254 "serial_number": "SPDK0", 00:35:37.254 "firmware_revision": "25.01", 00:35:37.254 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:37.254 "oacs": { 00:35:37.254 "security": 0, 00:35:37.254 "format": 0, 00:35:37.254 "firmware": 0, 00:35:37.254 "ns_manage": 0 00:35:37.254 }, 00:35:37.254 "multi_ctrlr": true, 00:35:37.254 
"ana_reporting": false 00:35:37.254 }, 00:35:37.254 "vs": { 00:35:37.254 "nvme_version": "1.3" 00:35:37.254 }, 00:35:37.254 "ns_data": { 00:35:37.254 "id": 1, 00:35:37.254 "can_share": true 00:35:37.254 } 00:35:37.254 } 00:35:37.254 ], 00:35:37.255 "mp_policy": "active_passive" 00:35:37.255 } 00:35:37.255 } 00:35:37.255 ] 00:35:37.513 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=558152 00:35:37.513 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:35:37.513 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:37.513 Running I/O for 10 seconds... 00:35:38.448 Latency(us) 00:35:38.448 [2024-12-09T04:30:32.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:38.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:38.448 Nvme0n1 : 1.00 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:35:38.448 [2024-12-09T04:30:32.673Z] =================================================================================================================== 00:35:38.448 [2024-12-09T04:30:32.673Z] Total : 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:35:38.448 00:35:39.382 05:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2c2ef427-8273-49dd-9824-29f0f9a90bfc 00:35:39.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:39.382 Nvme0n1 : 2.00 15176.50 59.28 0.00 0.00 0.00 0.00 0.00 00:35:39.382 [2024-12-09T04:30:33.607Z] =================================================================================================================== 00:35:39.382 [2024-12-09T04:30:33.607Z] Total : 15176.50 59.28 0.00 0.00 0.00 0.00 0.00 00:35:39.382 00:35:39.640 true 00:35:39.640 05:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c2ef427-8273-49dd-9824-29f0f9a90bfc 00:35:39.640 05:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:35:39.898 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:35:39.898 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:35:39.898 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 558152 00:35:40.463 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:40.463 Nvme0n1 : 3.00 15282.33 59.70 0.00 0.00 0.00 0.00 0.00 00:35:40.463 [2024-12-09T04:30:34.688Z] =================================================================================================================== 00:35:40.463 [2024-12-09T04:30:34.688Z] Total : 15282.33 59.70 0.00 0.00 0.00 0.00 0.00 00:35:40.463 00:35:41.396 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:41.396 Nvme0n1 : 4.00 15398.75 60.15 0.00 0.00 0.00 0.00 0.00 00:35:41.396 [2024-12-09T04:30:35.621Z] 
=================================================================================================================== 00:35:41.396 [2024-12-09T04:30:35.621Z] Total : 15398.75 60.15 0.00 0.00 0.00 0.00 0.00 00:35:41.396 00:35:42.769 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:42.769 Nvme0n1 : 5.00 15468.60 60.42 0.00 0.00 0.00 0.00 0.00 00:35:42.769 [2024-12-09T04:30:36.994Z] =================================================================================================================== 00:35:42.769 [2024-12-09T04:30:36.994Z] Total : 15468.60 60.42 0.00 0.00 0.00 0.00 0.00 00:35:42.769 00:35:43.704 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:43.704 Nvme0n1 : 6.00 15515.17 60.61 0.00 0.00 0.00 0.00 0.00 00:35:43.704 [2024-12-09T04:30:37.929Z] =================================================================================================================== 00:35:43.704 [2024-12-09T04:30:37.929Z] Total : 15515.17 60.61 0.00 0.00 0.00 0.00 0.00 00:35:43.704 00:35:44.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:44.637 Nvme0n1 : 7.00 15548.43 60.74 0.00 0.00 0.00 0.00 0.00 00:35:44.637 [2024-12-09T04:30:38.862Z] =================================================================================================================== 00:35:44.637 [2024-12-09T04:30:38.862Z] Total : 15548.43 60.74 0.00 0.00 0.00 0.00 0.00 00:35:44.637 00:35:45.571 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:45.571 Nvme0n1 : 8.00 15589.25 60.90 0.00 0.00 0.00 0.00 0.00 00:35:45.571 [2024-12-09T04:30:39.796Z] =================================================================================================================== 00:35:45.571 [2024-12-09T04:30:39.796Z] Total : 15589.25 60.90 0.00 0.00 0.00 0.00 0.00 00:35:45.571 00:35:46.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:46.500 Nvme0n1 : 9.00 15587.89 60.89 0.00 0.00 0.00 0.00 0.00 00:35:46.500 [2024-12-09T04:30:40.725Z] =================================================================================================================== 00:35:46.500 [2024-12-09T04:30:40.725Z] Total : 15587.89 60.89 0.00 0.00 0.00 0.00 0.00 00:35:46.500 00:35:47.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:47.430 Nvme0n1 : 10.00 15620.00 61.02 0.00 0.00 0.00 0.00 0.00 00:35:47.430 [2024-12-09T04:30:41.655Z] =================================================================================================================== 00:35:47.430 [2024-12-09T04:30:41.655Z] Total : 15620.00 61.02 0.00 0.00 0.00 0.00 0.00 00:35:47.430 00:35:47.430 00:35:47.430 Latency(us) 00:35:47.430 [2024-12-09T04:30:41.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:47.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:47.430 Nvme0n1 : 10.01 15612.85 60.99 0.00 0.00 8191.74 3543.80 22524.97 00:35:47.430 [2024-12-09T04:30:41.655Z] =================================================================================================================== 00:35:47.430 [2024-12-09T04:30:41.655Z] Total : 15612.85 60.99 0.00 0.00 8191.74 3543.80 22524.97 00:35:47.430 { 00:35:47.430 "results": [ 00:35:47.430 { 00:35:47.430 "job": "Nvme0n1", 00:35:47.430 "core_mask": "0x2", 00:35:47.430 "workload": "randwrite", 00:35:47.430 "status": "finished", 00:35:47.430 "queue_depth": 128, 00:35:47.430 "io_size": 4096, 00:35:47.430 
"runtime": 10.006568, 00:35:47.430 "iops": 15612.845483086709, 00:35:47.430 "mibps": 60.987677668307455, 00:35:47.430 "io_failed": 0, 00:35:47.430 "io_timeout": 0, 00:35:47.430 "avg_latency_us": 8191.740154571686, 00:35:47.430 "min_latency_us": 3543.7985185185184, 00:35:47.430 "max_latency_us": 22524.965925925924 00:35:47.430 } 00:35:47.430 ], 00:35:47.430 "core_count": 1 00:35:47.430 } 00:35:47.430 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 558021 00:35:47.430 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 558021 ']' 00:35:47.430 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 558021 00:35:47.430 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:35:47.430 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:47.430 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 558021 00:35:47.686 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:47.687 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:47.687 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 558021' 00:35:47.687 killing process with pid 558021 00:35:47.687 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 558021 00:35:47.687 Received shutdown signal, test time was about 10.000000 seconds 00:35:47.687 00:35:47.687 Latency(us) 00:35:47.687 [2024-12-09T04:30:41.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:47.687 [2024-12-09T04:30:41.912Z] =================================================================================================================== 00:35:47.687 [2024-12-09T04:30:41.912Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:47.687 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 558021 00:35:47.943 05:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:48.199 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:48.456 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c2ef427-8273-49dd-9824-29f0f9a90bfc 00:35:48.456 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:35:48.713 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:35:48.713 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:35:48.713 05:30:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 555383 00:35:48.713 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 555383 00:35:48.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 555383 Killed "${NVMF_APP[@]}" "$@" 00:35:48.713 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:35:48.713 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:35:48.713 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:48.713 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:48.713 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:48.713 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=559494 00:35:48.713 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:35:48.713 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 559494 00:35:48.713 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 559494 ']' 00:35:48.713 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:48.713 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:48.713 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:48.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:48.713 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:48.713 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:48.713 [2024-12-09 05:30:42.858396] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:35:48.713 [2024-12-09 05:30:42.858492] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:48.713 [2024-12-09 05:30:42.931047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:48.971 [2024-12-09 05:30:42.990516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:48.971 [2024-12-09 05:30:42.990580] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:48.971 [2024-12-09 05:30:42.990593] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:48.971 [2024-12-09 05:30:42.990604] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:35:48.971 [2024-12-09 05:30:42.990613] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:48.971 [2024-12-09 05:30:42.991185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:48.971 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:48.971 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:35:48.971 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:48.971 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:48.971 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:48.971 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:48.971 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:49.228 [2024-12-09 05:30:43.382722] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:35:49.228 [2024-12-09 05:30:43.382863] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:35:49.228 [2024-12-09 05:30:43.382913] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:35:49.228 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:35:49.228 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e8266918-6ee8-45c1-896a-17ee3b34e9ba 00:35:49.228 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e8266918-6ee8-45c1-896a-17ee3b34e9ba 00:35:49.228 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:49.228 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:35:49.228 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:49.228 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:49.228 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:49.486 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e8266918-6ee8-45c1-896a-17ee3b34e9ba -t 2000 00:35:49.744 [ 00:35:49.744 { 00:35:49.744 "name": "e8266918-6ee8-45c1-896a-17ee3b34e9ba", 00:35:49.744 "aliases": [ 00:35:49.744 "lvs/lvol" 00:35:49.744 ], 00:35:49.744 "product_name": "Logical Volume", 00:35:49.744 "block_size": 4096, 00:35:49.744 "num_blocks": 38912, 00:35:49.744 "uuid": "e8266918-6ee8-45c1-896a-17ee3b34e9ba", 00:35:49.744 "assigned_rate_limits": { 00:35:49.744 "rw_ios_per_sec": 0, 00:35:49.744 "rw_mbytes_per_sec": 0, 
00:35:49.744 "r_mbytes_per_sec": 0, 00:35:49.744 "w_mbytes_per_sec": 0 00:35:49.744 }, 00:35:49.744 "claimed": false, 00:35:49.744 "zoned": false, 00:35:49.744 "supported_io_types": { 00:35:49.744 "read": true, 00:35:49.744 "write": true, 00:35:49.744 "unmap": true, 00:35:49.744 "flush": false, 00:35:49.744 "reset": true, 00:35:49.744 "nvme_admin": false, 00:35:49.744 "nvme_io": false, 00:35:49.744 "nvme_io_md": false, 00:35:49.744 "write_zeroes": true, 00:35:49.744 "zcopy": false, 00:35:49.744 "get_zone_info": false, 00:35:49.744 "zone_management": false, 00:35:49.744 "zone_append": false, 00:35:49.744 "compare": false, 00:35:49.744 "compare_and_write": false, 00:35:49.744 "abort": false, 00:35:49.744 "seek_hole": true, 00:35:49.744 "seek_data": true, 00:35:49.744 "copy": false, 00:35:49.744 "nvme_iov_md": false 00:35:49.744 }, 00:35:49.744 "driver_specific": { 00:35:49.744 "lvol": { 00:35:49.744 "lvol_store_uuid": "2c2ef427-8273-49dd-9824-29f0f9a90bfc", 00:35:49.744 "base_bdev": "aio_bdev", 00:35:49.744 "thin_provision": false, 00:35:49.744 "num_allocated_clusters": 38, 00:35:49.744 "snapshot": false, 00:35:49.744 "clone": false, 00:35:49.744 "esnap_clone": false 00:35:49.744 } 00:35:49.744 } 00:35:49.744 } 00:35:49.744 ] 00:35:49.744 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:35:49.744 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c2ef427-8273-49dd-9824-29f0f9a90bfc 00:35:49.744 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:35:50.002 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:35:50.002 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c2ef427-8273-49dd-9824-29f0f9a90bfc 00:35:50.002 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:35:50.568 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:35:50.568 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:50.568 [2024-12-09 05:30:44.740190] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:35:50.568 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c2ef427-8273-49dd-9824-29f0f9a90bfc 00:35:50.568 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:35:50.568 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c2ef427-8273-49dd-9824-29f0f9a90bfc 00:35:50.568 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:50.568 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:50.568 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:50.568 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:50.568 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:50.568 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:50.568 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:50.568 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:35:50.568 05:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c2ef427-8273-49dd-9824-29f0f9a90bfc 00:35:50.826 request: 00:35:50.826 { 00:35:50.826 "uuid": "2c2ef427-8273-49dd-9824-29f0f9a90bfc", 00:35:50.826 "method": "bdev_lvol_get_lvstores", 00:35:50.826 "req_id": 1 00:35:50.826 } 00:35:50.826 Got JSON-RPC error response 00:35:50.826 response: 00:35:50.826 { 00:35:50.826 "code": -19, 00:35:50.826 "message": "No such device" 00:35:50.826 } 00:35:51.083 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:35:51.083 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:51.083 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:51.083 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:51.083 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:51.340 aio_bdev 00:35:51.340 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e8266918-6ee8-45c1-896a-17ee3b34e9ba 00:35:51.340 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e8266918-6ee8-45c1-896a-17ee3b34e9ba 00:35:51.340 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:51.340 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:35:51.340 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:51.340 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:51.340 05:30:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:51.597 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e8266918-6ee8-45c1-896a-17ee3b34e9ba -t 2000 00:35:51.855 [ 00:35:51.855 { 00:35:51.855 "name": "e8266918-6ee8-45c1-896a-17ee3b34e9ba", 00:35:51.855 "aliases": [ 00:35:51.855 "lvs/lvol" 00:35:51.855 ], 00:35:51.855 "product_name": "Logical Volume", 00:35:51.855 "block_size": 4096, 00:35:51.855 "num_blocks": 38912, 00:35:51.855 "uuid": "e8266918-6ee8-45c1-896a-17ee3b34e9ba", 00:35:51.855 "assigned_rate_limits": { 00:35:51.855 "rw_ios_per_sec": 0, 00:35:51.855 "rw_mbytes_per_sec": 0, 00:35:51.855 "r_mbytes_per_sec": 0, 00:35:51.855 "w_mbytes_per_sec": 0 00:35:51.855 }, 00:35:51.855 "claimed": false, 00:35:51.855 "zoned": false, 00:35:51.855 "supported_io_types": { 00:35:51.855 "read": true, 00:35:51.855 "write": true, 00:35:51.855 "unmap": true, 00:35:51.855 "flush": false, 00:35:51.855 "reset": true, 00:35:51.855 "nvme_admin": false, 00:35:51.855 "nvme_io": false, 00:35:51.855 "nvme_io_md": false, 00:35:51.855 "write_zeroes": true, 00:35:51.855 "zcopy": false, 00:35:51.855 "get_zone_info": false, 00:35:51.855 "zone_management": false, 00:35:51.855 "zone_append": false, 00:35:51.855 "compare": false, 00:35:51.855 "compare_and_write": false, 00:35:51.855 "abort": false, 00:35:51.855 "seek_hole": true, 00:35:51.855 "seek_data": true, 00:35:51.855 "copy": false, 00:35:51.855 "nvme_iov_md": false 00:35:51.855 }, 00:35:51.855 "driver_specific": { 00:35:51.855 "lvol": { 00:35:51.855 "lvol_store_uuid": "2c2ef427-8273-49dd-9824-29f0f9a90bfc", 00:35:51.855 "base_bdev": "aio_bdev", 00:35:51.855 "thin_provision": false, 00:35:51.855 "num_allocated_clusters": 38, 00:35:51.855 "snapshot": false, 00:35:51.855 "clone": false, 00:35:51.855 "esnap_clone": false 00:35:51.855 } 00:35:51.855 } 00:35:51.855 } 00:35:51.855 ] 00:35:51.855 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:35:51.855 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c2ef427-8273-49dd-9824-29f0f9a90bfc 00:35:51.855 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:35:52.112 05:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:35:52.113 05:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c2ef427-8273-49dd-9824-29f0f9a90bfc 00:35:52.113 05:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:35:52.370 05:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:35:52.370 05:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e8266918-6ee8-45c1-896a-17ee3b34e9ba 00:35:52.628 05:30:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2c2ef427-8273-49dd-9824-29f0f9a90bfc 00:35:52.885 05:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:53.143 00:35:53.143 real 0m19.376s 00:35:53.143 user 0m49.323s 00:35:53.143 sys 0m4.459s 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:53.143 ************************************ 00:35:53.143 END TEST lvs_grow_dirty 00:35:53.143 ************************************ 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:35:53.143 nvmf_trace.0 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:53.143 rmmod nvme_tcp 00:35:53.143 rmmod nvme_fabrics 00:35:53.143 rmmod nvme_keyring 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:35:53.143 
05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 559494 ']' 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 559494 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 559494 ']' 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 559494 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:53.143 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 559494 00:35:53.402 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:53.402 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:53.402 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 559494' 00:35:53.402 killing process with pid 559494 00:35:53.402 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 559494 00:35:53.402 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 559494 00:35:53.661 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:53.661 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:53.661 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:53.661 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:35:53.661 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:35:53.661 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:53.662 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:35:53.662 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:53.662 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:53.662 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:53.662 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:53.662 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:55.565 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:55.565 00:35:55.565 real 0m42.777s 00:35:55.565 user 1m12.752s 00:35:55.565 sys 0m8.308s 00:35:55.565 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:55.565 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:55.565 ************************************ 00:35:55.565 END TEST nvmf_lvs_grow 00:35:55.565 ************************************ 00:35:55.565 05:30:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:35:55.565 05:30:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:55.565 05:30:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:55.565 05:30:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:35:55.565 ************************************ 00:35:55.565 START TEST nvmf_bdev_io_wait 00:35:55.565 ************************************ 00:35:55.566 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:35:55.566 * Looking for test storage... 00:35:55.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:55.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.826 --rc genhtml_branch_coverage=1 00:35:55.826 --rc genhtml_function_coverage=1 00:35:55.826 --rc genhtml_legend=1 00:35:55.826 --rc geninfo_all_blocks=1 00:35:55.826 --rc geninfo_unexecuted_blocks=1 00:35:55.826 00:35:55.826 ' 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:55.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.826 --rc genhtml_branch_coverage=1 00:35:55.826 --rc genhtml_function_coverage=1 00:35:55.826 --rc genhtml_legend=1 00:35:55.826 --rc geninfo_all_blocks=1 00:35:55.826 --rc geninfo_unexecuted_blocks=1 00:35:55.826 00:35:55.826 ' 00:35:55.826 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:55.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.827 --rc genhtml_branch_coverage=1 00:35:55.827 --rc genhtml_function_coverage=1 00:35:55.827 --rc genhtml_legend=1 00:35:55.827 --rc geninfo_all_blocks=1 00:35:55.827 --rc geninfo_unexecuted_blocks=1 00:35:55.827 00:35:55.827 ' 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:55.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.827 --rc genhtml_branch_coverage=1 00:35:55.827 --rc genhtml_function_coverage=1 00:35:55.827 --rc genhtml_legend=1 00:35:55.827 --rc geninfo_all_blocks=1 00:35:55.827 --rc geninfo_unexecuted_blocks=1 00:35:55.827 00:35:55.827 ' 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:55.827 05:30:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:55.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:35:55.827 05:30:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:58.372 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:58.372 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:35:58.372 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:58.372 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:58.372 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:58.372 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:58.372 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:58.372 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:35:58.372 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:58.372 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:35:58.372 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:35:58.372 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:35:58.372 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:35:58.372 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:58.373 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:58.373 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:58.373 05:30:52 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:58.373 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:58.373 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:58.373 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:58.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:58.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:35:58.374 00:35:58.374 --- 10.0.0.2 ping statistics --- 00:35:58.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:58.374 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:58.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:58.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:35:58.374 00:35:58.374 --- 10.0.0.1 ping statistics --- 00:35:58.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:58.374 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=562033 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 562033 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 562033 ']' 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:58.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:58.374 [2024-12-09 05:30:52.297511] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:35:58.374 [2024-12-09 05:30:52.297605] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:58.374 [2024-12-09 05:30:52.371178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:58.374 [2024-12-09 05:30:52.432937] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:58.374 [2024-12-09 05:30:52.432990] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:58.374 [2024-12-09 05:30:52.433019] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:58.374 [2024-12-09 05:30:52.433031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:58.374 [2024-12-09 05:30:52.433042] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:58.374 [2024-12-09 05:30:52.434713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:58.374 [2024-12-09 05:30:52.434775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:58.374 [2024-12-09 05:30:52.434857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:58.374 [2024-12-09 05:30:52.434860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.374 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:35:58.633 [2024-12-09 05:30:52.638732] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:58.633 Malloc0 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:58.633 [2024-12-09 05:30:52.689958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=562195 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=562197 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:58.633 { 00:35:58.633 "params": { 
00:35:58.633 "name": "Nvme$subsystem", 00:35:58.633 "trtype": "$TEST_TRANSPORT", 00:35:58.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:58.633 "adrfam": "ipv4", 00:35:58.633 "trsvcid": "$NVMF_PORT", 00:35:58.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:58.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:58.633 "hdgst": ${hdgst:-false}, 00:35:58.633 "ddgst": ${ddgst:-false} 00:35:58.633 }, 00:35:58.633 "method": "bdev_nvme_attach_controller" 00:35:58.633 } 00:35:58.633 EOF 00:35:58.633 )") 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=562199 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:58.633 { 00:35:58.633 "params": { 00:35:58.633 "name": "Nvme$subsystem", 00:35:58.633 "trtype": "$TEST_TRANSPORT", 00:35:58.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:58.633 "adrfam": "ipv4", 00:35:58.633 "trsvcid": "$NVMF_PORT", 00:35:58.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:58.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:58.633 "hdgst": ${hdgst:-false}, 00:35:58.633 "ddgst": ${ddgst:-false} 00:35:58.633 }, 00:35:58.633 "method": "bdev_nvme_attach_controller" 00:35:58.633 } 00:35:58.633 EOF 00:35:58.633 )") 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=562202 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:58.633 { 00:35:58.633 "params": { 00:35:58.633 "name": "Nvme$subsystem", 00:35:58.633 "trtype": "$TEST_TRANSPORT", 00:35:58.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:58.633 "adrfam": "ipv4", 00:35:58.633 "trsvcid": "$NVMF_PORT", 00:35:58.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:58.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:58.633 "hdgst": ${hdgst:-false}, 
00:35:58.633 "ddgst": ${ddgst:-false} 00:35:58.633 }, 00:35:58.633 "method": "bdev_nvme_attach_controller" 00:35:58.633 } 00:35:58.633 EOF 00:35:58.633 )") 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:58.633 { 00:35:58.633 "params": { 00:35:58.633 "name": "Nvme$subsystem", 00:35:58.633 "trtype": "$TEST_TRANSPORT", 00:35:58.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:58.633 "adrfam": "ipv4", 00:35:58.633 "trsvcid": "$NVMF_PORT", 00:35:58.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:58.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:58.633 "hdgst": ${hdgst:-false}, 00:35:58.633 "ddgst": ${ddgst:-false} 00:35:58.633 }, 00:35:58.633 "method": "bdev_nvme_attach_controller" 00:35:58.633 } 00:35:58.633 EOF 00:35:58.633 )") 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 562195 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:58.633 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:58.633 "params": { 00:35:58.633 "name": "Nvme1", 00:35:58.634 "trtype": "tcp", 00:35:58.634 "traddr": "10.0.0.2", 00:35:58.634 "adrfam": "ipv4", 00:35:58.634 "trsvcid": "4420", 00:35:58.634 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:58.634 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:58.634 "hdgst": false, 00:35:58.634 "ddgst": false 00:35:58.634 }, 00:35:58.634 "method": "bdev_nvme_attach_controller" 00:35:58.634 }' 00:35:58.634 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:35:58.634 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:58.634 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:58.634 "params": { 00:35:58.634 "name": "Nvme1", 00:35:58.634 "trtype": "tcp", 00:35:58.634 "traddr": "10.0.0.2", 00:35:58.634 "adrfam": "ipv4", 00:35:58.634 "trsvcid": "4420", 00:35:58.634 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:58.634 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:58.634 "hdgst": false, 00:35:58.634 "ddgst": false 00:35:58.634 }, 00:35:58.634 "method": "bdev_nvme_attach_controller" 00:35:58.634 }' 00:35:58.634 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:58.634 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:58.634 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:58.634 "params": { 00:35:58.634 "name": "Nvme1", 00:35:58.634 "trtype": "tcp", 00:35:58.634 "traddr": "10.0.0.2", 00:35:58.634 "adrfam": "ipv4", 00:35:58.634 "trsvcid": "4420", 00:35:58.634 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:58.634 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:58.634 "hdgst": false, 00:35:58.634 "ddgst": false 00:35:58.634 }, 00:35:58.634 "method": "bdev_nvme_attach_controller" 00:35:58.634 }' 00:35:58.634 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:58.634 05:30:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:58.634 "params": { 00:35:58.634 "name": "Nvme1", 00:35:58.634 "trtype": "tcp", 00:35:58.634 "traddr": "10.0.0.2", 00:35:58.634 "adrfam": "ipv4", 00:35:58.634 "trsvcid": "4420", 00:35:58.634 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:58.634 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:58.634 "hdgst": false, 00:35:58.634 "ddgst": false 00:35:58.634 }, 00:35:58.634 "method": "bdev_nvme_attach_controller" 00:35:58.634 }' 00:35:58.634 [2024-12-09 05:30:52.739630] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:35:58.634 [2024-12-09 05:30:52.739630] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:35:58.634 [2024-12-09 05:30:52.739630] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:35:58.634 [2024-12-09 05:30:52.739722] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-12-09 05:30:52.739722] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-12-09 05:30:52.739723] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:35:58.634 --proc-type=auto ] 00:35:58.634 --proc-type=auto ] 00:35:58.634 [2024-12-09 05:30:52.742085] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:35:58.634 [2024-12-09 05:30:52.742168] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:35:58.891 [2024-12-09 05:30:52.920986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:58.891 [2024-12-09 05:30:52.974978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:58.892 [2024-12-09 05:30:53.019841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:58.892 [2024-12-09 05:30:53.073242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:59.149 [2024-12-09 05:30:53.139719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:59.149 [2024-12-09 05:30:53.202705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:59.149 [2024-12-09 05:30:53.202811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:59.149 [2024-12-09 05:30:53.251831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:59.406 Running I/O for 1 seconds... 00:35:59.406 Running I/O for 1 seconds... 00:35:59.406 Running I/O for 1 seconds... 00:35:59.406 Running I/O for 1 seconds... 00:36:00.336 6493.00 IOPS, 25.36 MiB/s 00:36:00.336 Latency(us) 00:36:00.336 [2024-12-09T04:30:54.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:00.336 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:36:00.336 Nvme1n1 : 1.02 6522.76 25.48 0.00 0.00 19467.64 6844.87 29321.29 00:36:00.336 [2024-12-09T04:30:54.561Z] =================================================================================================================== 00:36:00.336 [2024-12-09T04:30:54.561Z] Total : 6522.76 25.48 0.00 0.00 19467.64 6844.87 29321.29 00:36:00.336 171504.00 IOPS, 669.94 MiB/s 00:36:00.336 Latency(us) 00:36:00.336 [2024-12-09T04:30:54.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:00.336 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:36:00.336 Nvme1n1 : 1.00 171168.29 668.63 0.00 0.00 743.72 312.51 1917.53 00:36:00.336 [2024-12-09T04:30:54.561Z] =================================================================================================================== 00:36:00.336 [2024-12-09T04:30:54.561Z] Total : 171168.29 668.63 0.00 0.00 743.72 312.51 1917.53 00:36:00.336 6336.00 IOPS, 24.75 MiB/s 00:36:00.336 Latency(us) 00:36:00.336 [2024-12-09T04:30:54.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:00.336 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:36:00.336 Nvme1n1 : 1.01 6439.79 25.16 0.00 0.00 19814.34 4514.70 40195.41 00:36:00.336 [2024-12-09T04:30:54.561Z] =================================================================================================================== 00:36:00.336 [2024-12-09T04:30:54.561Z] Total : 6439.79 25.16 0.00 0.00 19814.34 4514.70 40195.41 00:36:00.336 8652.00 IOPS, 33.80 MiB/s 00:36:00.336 Latency(us) 00:36:00.336 [2024-12-09T04:30:54.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:00.337 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:36:00.337 Nvme1n1 : 1.01 8702.86 34.00 0.00 0.00 14637.37 7670.14 24660.95 00:36:00.337 [2024-12-09T04:30:54.562Z] 
=================================================================================================================== 00:36:00.337 [2024-12-09T04:30:54.562Z] Total : 8702.86 34.00 0.00 0.00 14637.37 7670.14 24660.95 00:36:00.594 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 562197 00:36:00.594 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 562199 00:36:00.594 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 562202 00:36:00.594 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:00.594 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.594 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:00.594 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.594 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:36:00.594 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:36:00.594 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:00.594 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:36:00.594 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:00.594 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:36:00.594 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:00.594 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:00.594 rmmod nvme_tcp 00:36:00.594 rmmod nvme_fabrics 00:36:00.594 rmmod nvme_keyring 00:36:00.594 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:00.594 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:36:00.594 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:36:00.594 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 562033 ']' 00:36:00.594 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 562033 00:36:00.594 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 562033 ']' 00:36:00.594 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 562033 00:36:00.594 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:36:00.594 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:00.594 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 562033 00:36:00.852 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:00.852 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:00.852 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 562033' 00:36:00.852 killing process with pid 562033 00:36:00.852 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 562033 00:36:00.852 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 562033 00:36:00.852 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:00.852 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:00.852 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:00.852 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:36:00.852 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:36:00.852 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:00.852 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:36:00.852 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:00.852 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:00.852 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:00.852 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:00.852 05:30:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:03.384 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:03.384 00:36:03.384 real 0m7.376s 00:36:03.384 user 0m16.592s 00:36:03.384 sys 0m3.468s 00:36:03.384 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:03.384 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:03.384 ************************************ 00:36:03.384 END TEST nvmf_bdev_io_wait 00:36:03.384 ************************************ 00:36:03.384 05:30:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:36:03.384 05:30:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:03.384 05:30:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:03.384 05:30:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:36:03.384 ************************************ 00:36:03.384 START TEST nvmf_queue_depth 00:36:03.384 ************************************ 00:36:03.384 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:36:03.384 * Looking for test storage... 
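Recap of the bdev_io_wait run that just completed above: four bdevperf instances (write, read, flush, unmap on core masks 0x10/0x20/0x40/0x80) each attached one NVMe-oF TCP controller through a JSON config passed on /dev/fd/63. A sketch of a single invocation follows, with the config written to a regular file for readability; the trace only prints the inner params object, so the standard subsystems/bdev wrapper is assumed here.

    # hypothetical config file standing in for the /dev/fd/63 pipe used by the harness
    cat > /tmp/bdevperf_nvme.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                    "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false } } ] } ] }
    EOF
    # 1-second write workload, queue depth 128, 4 KiB I/O, against the resulting Nvme1n1 bdev
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 0x10 -i 1 --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w write -t 1 -s 256

The other three jobs in the trace differ only in -m/-i and the -w workload.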
00:36:03.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:03.384 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:03.384 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:36:03.384 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:03.384 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:03.384 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:03.384 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:03.384 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:03.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.385 --rc genhtml_branch_coverage=1 00:36:03.385 --rc genhtml_function_coverage=1 00:36:03.385 --rc genhtml_legend=1 00:36:03.385 --rc geninfo_all_blocks=1 00:36:03.385 --rc geninfo_unexecuted_blocks=1 00:36:03.385 00:36:03.385 ' 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:03.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.385 --rc genhtml_branch_coverage=1 00:36:03.385 --rc genhtml_function_coverage=1 00:36:03.385 --rc genhtml_legend=1 00:36:03.385 --rc geninfo_all_blocks=1 00:36:03.385 --rc geninfo_unexecuted_blocks=1 00:36:03.385 00:36:03.385 ' 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:03.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.385 --rc genhtml_branch_coverage=1 00:36:03.385 --rc genhtml_function_coverage=1 00:36:03.385 --rc genhtml_legend=1 00:36:03.385 --rc geninfo_all_blocks=1 00:36:03.385 --rc geninfo_unexecuted_blocks=1 00:36:03.385 00:36:03.385 ' 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:03.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.385 --rc genhtml_branch_coverage=1 00:36:03.385 --rc genhtml_function_coverage=1 00:36:03.385 --rc genhtml_legend=1 00:36:03.385 --rc geninfo_all_blocks=1 00:36:03.385 --rc geninfo_unexecuted_blocks=1 00:36:03.385 00:36:03.385 ' 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:03.385 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:03.385 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:03.386 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:03.386 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:03.386 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:03.386 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:36:03.386 05:30:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:05.290 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:05.290 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:05.290 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:05.291 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:05.291 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:05.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:05.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:36:05.291 00:36:05.291 --- 10.0.0.2 ping statistics --- 00:36:05.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:05.291 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:36:05.291 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:05.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
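The pings here exercise the namespace plumbing that nvmf_tcp_init traced just above; condensed into plain commands (interface names cvl_0_0/cvl_0_1 as detected in this run), that plumbing amounts to:

    # target port lives in its own namespace, initiator port stays in the host namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'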
00:36:05.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:36:05.550 00:36:05.550 --- 10.0.0.1 ping statistics --- 00:36:05.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:05.550 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:36:05.550 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:05.550 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:36:05.550 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:05.550 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:05.550 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:05.550 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:05.550 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:05.550 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:05.550 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:05.550 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:36:05.550 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:05.550 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:05.550 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:05.550 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=564434 00:36:05.550 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:36:05.550 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 564434 00:36:05.550 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 564434 ']' 00:36:05.550 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:05.550 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:05.550 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:05.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:05.550 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:05.550 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:05.550 [2024-12-09 05:30:59.606978] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:36:05.550 [2024-12-09 05:30:59.607080] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:05.550 [2024-12-09 05:30:59.684563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:05.550 [2024-12-09 05:30:59.743002] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:05.550 [2024-12-09 05:30:59.743070] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:05.550 [2024-12-09 05:30:59.743099] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:05.550 [2024-12-09 05:30:59.743111] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:05.550 [2024-12-09 05:30:59.743120] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:05.550 [2024-12-09 05:30:59.743712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:05.808 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:05.808 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:36:05.808 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:05.808 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:05.808 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:05.808 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:05.808 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:05.808 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.808 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:05.809 [2024-12-09 05:30:59.893425] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:05.809 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.809 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:05.809 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.809 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:05.809 Malloc0 00:36:05.809 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.809 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:05.809 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.809 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:05.809 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.809 05:30:59 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:05.809 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.809 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:05.809 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.809 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:05.809 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.809 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:05.809 [2024-12-09 05:30:59.941672] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:05.809 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.809 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=564454 00:36:05.809 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:36:05.809 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:05.809 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 564454 /var/tmp/bdevperf.sock 00:36:05.809 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 564454 ']' 00:36:05.809 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:05.809 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:05.809 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:05.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:05.809 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:05.809 05:30:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:05.809 [2024-12-09 05:30:59.987476] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:36:05.809 [2024-12-09 05:30:59.987538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid564454 ] 00:36:06.067 [2024-12-09 05:31:00.059662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:06.067 [2024-12-09 05:31:00.117865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:06.067 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:06.067 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:36:06.067 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:36:06.067 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.067 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:06.324 NVMe0n1 00:36:06.324 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.324 05:31:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:06.582 Running I/O for 10 seconds... 00:36:08.449 8192.00 IOPS, 32.00 MiB/s [2024-12-09T04:31:04.046Z] 8322.50 IOPS, 32.51 MiB/s [2024-12-09T04:31:04.979Z] 8357.00 IOPS, 32.64 MiB/s [2024-12-09T04:31:05.910Z] 8443.25 IOPS, 32.98 MiB/s [2024-12-09T04:31:06.843Z] 8417.40 IOPS, 32.88 MiB/s [2024-12-09T04:31:07.772Z] 8493.50 IOPS, 33.18 MiB/s [2024-12-09T04:31:08.706Z] 8479.71 IOPS, 33.12 MiB/s [2024-12-09T04:31:10.080Z] 8491.88 IOPS, 33.17 MiB/s [2024-12-09T04:31:11.013Z] 8522.00 IOPS, 33.29 MiB/s [2024-12-09T04:31:11.013Z] 8515.30 IOPS, 33.26 MiB/s 00:36:16.788 Latency(us) 00:36:16.788 [2024-12-09T04:31:11.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:16.788 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:36:16.788 Verification LBA range: start 0x0 length 0x4000 00:36:16.788 NVMe0n1 : 10.07 8553.84 33.41 0.00 0.00 119183.60 12233.39 71458.51 00:36:16.788 [2024-12-09T04:31:11.013Z] =================================================================================================================== 00:36:16.788 [2024-12-09T04:31:11.013Z] Total : 8553.84 33.41 0.00 0.00 119183.60 12233.39 71458.51 00:36:16.788 { 00:36:16.788 "results": [ 00:36:16.788 { 00:36:16.788 "job": "NVMe0n1", 00:36:16.788 "core_mask": "0x1", 00:36:16.788 "workload": "verify", 00:36:16.788 "status": "finished", 00:36:16.788 "verify_range": { 00:36:16.788 "start": 0, 00:36:16.788 "length": 16384 00:36:16.788 }, 00:36:16.788 "queue_depth": 1024, 00:36:16.788 "io_size": 4096, 00:36:16.788 "runtime": 10.067292, 00:36:16.788 "iops": 8553.839503214966, 00:36:16.788 "mibps": 33.41343555943346, 00:36:16.788 "io_failed": 0, 00:36:16.788 "io_timeout": 0, 00:36:16.788 "avg_latency_us": 119183.5956662443, 00:36:16.788 "min_latency_us": 12233.386666666667, 00:36:16.788 "max_latency_us": 71458.5125925926 00:36:16.788 } 00:36:16.788 ], 00:36:16.788 "core_count": 1 00:36:16.788 } 00:36:16.788 05:31:10 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 564454 00:36:16.788 05:31:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 564454 ']' 00:36:16.788 05:31:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 564454 00:36:16.788 05:31:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:36:16.788 05:31:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:16.788 05:31:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 564454 00:36:16.788 05:31:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:16.788 05:31:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:16.788 05:31:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 564454' 00:36:16.788 killing process with pid 564454 00:36:16.788 05:31:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 564454 00:36:16.788 Received shutdown signal, test time was about 10.000000 seconds 00:36:16.788 00:36:16.788 Latency(us) 00:36:16.788 [2024-12-09T04:31:11.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:16.788 [2024-12-09T04:31:11.013Z] =================================================================================================================== 00:36:16.788 [2024-12-09T04:31:11.013Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:16.788 05:31:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 564454 00:36:17.046 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:36:17.046 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:36:17.046 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:17.046 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:36:17.046 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:17.046 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:36:17.046 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:17.046 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:17.046 rmmod nvme_tcp 00:36:17.046 rmmod nvme_fabrics 00:36:17.046 rmmod nvme_keyring 00:36:17.046 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:17.046 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:36:17.046 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:36:17.046 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 564434 ']' 00:36:17.046 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 564434 00:36:17.046 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 564434 ']' 00:36:17.046 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 564434 00:36:17.046 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:36:17.046 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:17.046 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 564434 00:36:17.046 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:17.046 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:17.046 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 564434' 00:36:17.046 killing process with pid 564434 00:36:17.046 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 564434 00:36:17.046 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 564434 00:36:17.304 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:17.304 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:17.304 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:17.304 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:36:17.304 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:36:17.304 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:17.304 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:36:17.304 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:17.304 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:17.304 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:17.304 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:17.304 05:31:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:19.837 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:19.837 00:36:19.837 real 0m16.303s 00:36:19.837 user 0m23.010s 00:36:19.837 sys 0m3.062s 00:36:19.837 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:19.837 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:19.837 ************************************ 00:36:19.837 END TEST nvmf_queue_depth 00:36:19.837 ************************************ 00:36:19.837 05:31:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:36:19.837 05:31:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:19.837 05:31:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:19.837 05:31:13 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:36:19.837 ************************************ 00:36:19.837 START TEST nvmf_target_multipath 00:36:19.837 ************************************ 00:36:19.837 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:36:19.837 * Looking for test storage... 00:36:19.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:19.837 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:19.837 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:36:19.837 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:19.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:19.838 --rc genhtml_branch_coverage=1 00:36:19.838 --rc genhtml_function_coverage=1 00:36:19.838 --rc genhtml_legend=1 00:36:19.838 --rc geninfo_all_blocks=1 00:36:19.838 --rc geninfo_unexecuted_blocks=1 00:36:19.838 00:36:19.838 ' 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:19.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:19.838 --rc genhtml_branch_coverage=1 00:36:19.838 --rc genhtml_function_coverage=1 00:36:19.838 --rc genhtml_legend=1 00:36:19.838 --rc geninfo_all_blocks=1 00:36:19.838 --rc geninfo_unexecuted_blocks=1 00:36:19.838 00:36:19.838 ' 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:19.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:19.838 --rc genhtml_branch_coverage=1 00:36:19.838 --rc genhtml_function_coverage=1 00:36:19.838 --rc genhtml_legend=1 00:36:19.838 --rc geninfo_all_blocks=1 00:36:19.838 --rc geninfo_unexecuted_blocks=1 00:36:19.838 00:36:19.838 ' 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:19.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:19.838 --rc genhtml_branch_coverage=1 00:36:19.838 --rc genhtml_function_coverage=1 00:36:19.838 --rc genhtml_legend=1 00:36:19.838 --rc geninfo_all_blocks=1 00:36:19.838 --rc geninfo_unexecuted_blocks=1 00:36:19.838 00:36:19.838 ' 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:19.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:19.838 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:19.839 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:19.839 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:36:19.839 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:19.839 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:19.839 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:19.839 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:19.839 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:19.839 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:19.839 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:19.839 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:19.839 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:19.839 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:19.839 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:36:19.839 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:21.760 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:21.760 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:21.760 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:21.761 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:21.761 05:31:15 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:21.761 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:36:21.761 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:22.028 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:22.028 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:22.028 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:22.028 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:22.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:22.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:36:22.028 00:36:22.028 --- 10.0.0.2 ping statistics --- 00:36:22.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:22.028 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:36:22.028 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:22.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:22.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:36:22.028 00:36:22.028 --- 10.0.0.1 ping statistics --- 00:36:22.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:22.028 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:36:22.028 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:22.028 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:36:22.028 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:22.028 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:22.028 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:22.028 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:22.028 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:22.028 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:22.028 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:22.028 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:36:22.028 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:36:22.028 only one NIC for nvmf test 00:36:22.028 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:36:22.028 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:22.028 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:36:22.028 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:22.028 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
00:36:22.028 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:22.028 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:22.028 rmmod nvme_tcp 00:36:22.028 rmmod nvme_fabrics 00:36:22.028 rmmod nvme_keyring 00:36:22.029 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:22.029 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:36:22.029 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:36:22.029 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:36:22.029 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:22.029 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:22.029 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:22.029 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:36:22.029 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:36:22.029 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:22.029 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:36:22.029 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:22.029 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:22.029 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:22.029 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:22.029 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:24.559 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:24.559 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:24.560 00:36:24.560 real 0m4.709s 00:36:24.560 user 0m0.992s 00:36:24.560 sys 0m1.678s 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:36:24.560 ************************************ 00:36:24.560 END TEST nvmf_target_multipath 00:36:24.560 ************************************ 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:36:24.560 ************************************ 00:36:24.560 START TEST nvmf_zcopy 00:36:24.560 ************************************ 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:36:24.560 * Looking for test storage... 
00:36:24.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:24.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.560 --rc genhtml_branch_coverage=1 00:36:24.560 --rc genhtml_function_coverage=1 00:36:24.560 --rc genhtml_legend=1 00:36:24.560 --rc geninfo_all_blocks=1 00:36:24.560 --rc geninfo_unexecuted_blocks=1 00:36:24.560 00:36:24.560 ' 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:24.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.560 --rc genhtml_branch_coverage=1 00:36:24.560 --rc genhtml_function_coverage=1 00:36:24.560 --rc genhtml_legend=1 00:36:24.560 --rc geninfo_all_blocks=1 00:36:24.560 --rc geninfo_unexecuted_blocks=1 00:36:24.560 00:36:24.560 ' 00:36:24.560 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:24.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.561 --rc genhtml_branch_coverage=1 00:36:24.561 --rc genhtml_function_coverage=1 00:36:24.561 --rc genhtml_legend=1 00:36:24.561 --rc geninfo_all_blocks=1 00:36:24.561 --rc geninfo_unexecuted_blocks=1 00:36:24.561 00:36:24.561 ' 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:24.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.561 --rc genhtml_branch_coverage=1 00:36:24.561 --rc genhtml_function_coverage=1 00:36:24.561 --rc genhtml_legend=1 00:36:24.561 --rc geninfo_all_blocks=1 00:36:24.561 --rc geninfo_unexecuted_blocks=1 00:36:24.561 00:36:24.561 ' 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:24.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:36:24.561 05:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:26.462 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:26.462 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:26.462 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:26.462 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:26.462 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:36:26.463 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:26.463 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:26.463 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:26.463 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:26.463 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:26.463 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:26.463 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:26.463 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:26.463 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:26.463 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:26.463 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:26.463 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:36:26.463 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:26.463 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:26.463 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:26.463 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:26.463 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:26.463 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:26.463 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:26.463 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:26.463 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:26.463 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:26.720 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:26.720 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:26.720 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:26.720 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:26.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:26.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:36:26.720 00:36:26.720 --- 10.0.0.2 ping statistics --- 00:36:26.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:26.720 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:36:26.720 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:26.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:26.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:36:26.720 00:36:26.720 --- 10.0.0.1 ping statistics --- 00:36:26.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:26.720 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:36:26.720 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:26.720 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:36:26.720 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:26.720 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:26.720 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:26.720 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:26.720 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:26.720 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:26.720 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:26.720 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:36:26.720 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:26.720 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:26.720 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:26.720 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=569664 00:36:26.720 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:36:26.720 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 569664 00:36:26.720 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 569664 ']' 00:36:26.720 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:26.720 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:26.720 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:26.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:26.720 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:26.720 05:31:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:26.720 [2024-12-09 05:31:20.804814] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:36:26.720 [2024-12-09 05:31:20.804917] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:26.720 [2024-12-09 05:31:20.882920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:26.720 [2024-12-09 05:31:20.941332] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:26.720 [2024-12-09 05:31:20.941414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:26.720 [2024-12-09 05:31:20.941445] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:26.720 [2024-12-09 05:31:20.941458] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:26.720 [2024-12-09 05:31:20.941468] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:26.720 [2024-12-09 05:31:20.942161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:26.977 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:26.977 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:36:26.977 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:26.977 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:26.977 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:26.977 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:26.977 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:36:26.977 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:36:26.977 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.977 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:26.977 [2024-12-09 05:31:21.086131] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:26.977 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.977 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:26.977 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.977 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:26.977 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.977 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:26.977 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.977 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:26.977 [2024-12-09 05:31:21.102377] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:36:26.977 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.977 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:26.977 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.977 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:26.978 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.978 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:36:26.978 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.978 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:26.978 malloc0 00:36:26.978 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.978 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:36:26.978 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.978 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:26.978 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.978 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:36:26.978 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:36:26.978 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:36:26.978 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:36:26.978 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:26.978 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:26.978 { 00:36:26.978 "params": { 00:36:26.978 "name": "Nvme$subsystem", 00:36:26.978 "trtype": "$TEST_TRANSPORT", 00:36:26.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:26.978 "adrfam": "ipv4", 00:36:26.978 "trsvcid": "$NVMF_PORT", 00:36:26.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:26.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:26.978 "hdgst": ${hdgst:-false}, 00:36:26.978 "ddgst": ${ddgst:-false} 00:36:26.978 }, 00:36:26.978 "method": "bdev_nvme_attach_controller" 00:36:26.978 } 00:36:26.978 EOF 00:36:26.978 )") 00:36:26.978 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:36:26.978 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
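The target bring-up traced above reduces to a short sequence of iproute2/iptables commands plus SPDK RPCs. A minimal sketch follows, assuming the cvl_0_* interface names from this run and treating rpc_cmd as roughly equivalent to scripts/rpc.py talking to the default /var/tmp/spdk.sock socket; paths are relative to the spdk checkout used by this job.

# Put one port of the e810 pair into a private namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Launch the target inside the namespace, then provision it over RPC with the
# same arguments shown in the rpc_cmd traces above
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1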
00:36:26.978 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:36:26.978 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:26.978 "params": { 00:36:26.978 "name": "Nvme1", 00:36:26.978 "trtype": "tcp", 00:36:26.978 "traddr": "10.0.0.2", 00:36:26.978 "adrfam": "ipv4", 00:36:26.978 "trsvcid": "4420", 00:36:26.978 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:26.978 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:26.978 "hdgst": false, 00:36:26.978 "ddgst": false 00:36:26.978 }, 00:36:26.978 "method": "bdev_nvme_attach_controller" 00:36:26.978 }' 00:36:26.978 [2024-12-09 05:31:21.192189] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:36:26.978 [2024-12-09 05:31:21.192301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid569800 ] 00:36:27.235 [2024-12-09 05:31:21.266758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:27.235 [2024-12-09 05:31:21.324376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:27.491 Running I/O for 10 seconds... 00:36:29.465 5743.00 IOPS, 44.87 MiB/s [2024-12-09T04:31:24.642Z] 5803.00 IOPS, 45.34 MiB/s [2024-12-09T04:31:25.574Z] 5801.00 IOPS, 45.32 MiB/s [2024-12-09T04:31:26.944Z] 5817.25 IOPS, 45.45 MiB/s [2024-12-09T04:31:27.875Z] 5814.40 IOPS, 45.42 MiB/s [2024-12-09T04:31:28.805Z] 5821.50 IOPS, 45.48 MiB/s [2024-12-09T04:31:29.737Z] 5828.14 IOPS, 45.53 MiB/s [2024-12-09T04:31:30.668Z] 5824.75 IOPS, 45.51 MiB/s [2024-12-09T04:31:31.602Z] 5829.78 IOPS, 45.55 MiB/s [2024-12-09T04:31:31.602Z] 5839.20 IOPS, 45.62 MiB/s 00:36:37.377 Latency(us) 00:36:37.377 [2024-12-09T04:31:31.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:37.377 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:36:37.377 Verification LBA range: start 0x0 length 0x1000 00:36:37.377 Nvme1n1 : 10.01 5841.06 45.63 0.00 0.00 21853.88 1371.40 31263.10 00:36:37.377 [2024-12-09T04:31:31.602Z] =================================================================================================================== 00:36:37.377 [2024-12-09T04:31:31.602Z] Total : 5841.06 45.63 0.00 0.00 21853.88 1371.40 31263.10 00:36:37.635 05:31:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=571014 00:36:37.635 05:31:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:36:37.635 05:31:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:37.635 05:31:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:36:37.635 05:31:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:36:37.635 05:31:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:36:37.635 05:31:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:36:37.635 05:31:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:37.635 05:31:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:37.635 { 00:36:37.635 "params": { 00:36:37.635 "name": 
"Nvme$subsystem", 00:36:37.635 "trtype": "$TEST_TRANSPORT", 00:36:37.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:37.635 "adrfam": "ipv4", 00:36:37.635 "trsvcid": "$NVMF_PORT", 00:36:37.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:37.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:37.635 "hdgst": ${hdgst:-false}, 00:36:37.635 "ddgst": ${ddgst:-false} 00:36:37.635 }, 00:36:37.635 "method": "bdev_nvme_attach_controller" 00:36:37.635 } 00:36:37.635 EOF 00:36:37.635 )") 00:36:37.635 [2024-12-09 05:31:31.847228] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.635 [2024-12-09 05:31:31.847295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.635 05:31:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:36:37.635 05:31:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:36:37.635 05:31:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:36:37.635 05:31:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:37.635 "params": { 00:36:37.635 "name": "Nvme1", 00:36:37.635 "trtype": "tcp", 00:36:37.635 "traddr": "10.0.0.2", 00:36:37.635 "adrfam": "ipv4", 00:36:37.635 "trsvcid": "4420", 00:36:37.635 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:37.635 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:37.635 "hdgst": false, 00:36:37.635 "ddgst": false 00:36:37.635 }, 00:36:37.635 "method": "bdev_nvme_attach_controller" 00:36:37.635 }' 00:36:37.635 [2024-12-09 05:31:31.855199] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.635 [2024-12-09 05:31:31.855229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:31.863220] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:31.863240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:31.871244] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:31.871290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:31.879286] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:31.879307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:31.887307] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:31.887343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:31.889413] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:36:37.894 [2024-12-09 05:31:31.889489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid571014 ] 00:36:37.894 [2024-12-09 05:31:31.895331] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:31.895357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:31.903349] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:31.903370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:31.911367] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:31.911388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:31.919397] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:31.919419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:31.927429] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:31.927452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:31.935449] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:31.935470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:31.943467] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:31.943488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:31.951488] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:31.951508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:31.959516] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:31.959540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:31.959740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:37.894 [2024-12-09 05:31:31.967575] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:31.967605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:31.975599] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:31.975649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:31.983593] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:31.983614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:31.991630] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:31.991650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:36:37.894 [2024-12-09 05:31:31.999647] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:31.999667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:32.007666] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:32.007686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:32.015687] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:32.015707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:32.021419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:37.894 [2024-12-09 05:31:32.023723] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:32.023742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:32.031745] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:32.031765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:32.039774] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:32.039806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:32.047804] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:32.047840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:32.055825] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:32.055863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:32.063846] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:32.063880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:32.071866] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:32.071901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:32.079884] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:32.079920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:32.087887] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:32.087908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:32.095917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:32.095944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 05:31:32.103950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:32.103987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:37.894 [2024-12-09 
05:31:32.111970] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:37.894 [2024-12-09 05:31:32.112006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.152 [2024-12-09 05:31:32.119979] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.152 [2024-12-09 05:31:32.120005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.152 [2024-12-09 05:31:32.127988] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.152 [2024-12-09 05:31:32.128008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.152 [2024-12-09 05:31:32.136038] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.152 [2024-12-09 05:31:32.136066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.152 [2024-12-09 05:31:32.144043] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.152 [2024-12-09 05:31:32.144067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.152 [2024-12-09 05:31:32.152064] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.152 [2024-12-09 05:31:32.152086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.152 [2024-12-09 05:31:32.160087] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.152 [2024-12-09 05:31:32.160108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.152 [2024-12-09 05:31:32.168111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.152 [2024-12-09 05:31:32.168133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.152 [2024-12-09 05:31:32.176134] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.152 [2024-12-09 05:31:32.176156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.152 [2024-12-09 05:31:32.184157] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.152 [2024-12-09 05:31:32.184179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.152 [2024-12-09 05:31:32.192188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.153 [2024-12-09 05:31:32.192211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.153 [2024-12-09 05:31:32.238509] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.153 [2024-12-09 05:31:32.238537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.153 [2024-12-09 05:31:32.244362] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.153 [2024-12-09 05:31:32.244386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.153 [2024-12-09 05:31:32.252377] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.153 [2024-12-09 05:31:32.252416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.153 Running I/O for 5 seconds... 
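Both bdevperf passes are driven the same way: gen_nvmf_target_json renders the bdev_nvme_attach_controller parameters printed in the trace and the result is handed to bdevperf as a JSON config over an anonymous /dev/fd descriptor (presumably bash process substitution). The repeated "Requested NSID 1 already in use" lines around this second pass show nvmf_subsystem_add_ns being retried over RPC while NSID 1 is still attached. A minimal sketch of the equivalent invocations, with the config written to a file for clarity and assuming SPDK's usual "subsystems"/"bdev" scaffolding around the fragment printed above (the scaffolding itself is not shown verbatim in this excerpt):

cat > /tmp/zcopy_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# First pass (target/zcopy.sh@33 above): 10 s verify workload, QD 128, 8 KiB I/O
build/examples/bdevperf --json /tmp/zcopy_bdev.json -t 10 -q 128 -w verify -o 8192
# Second pass (target/zcopy.sh@37 above): 5 s random read/write (-M 50 mix), QD 128, 8 KiB I/O
build/examples/bdevperf --json /tmp/zcopy_bdev.json -t 5 -q 128 -w randrw -M 50 -o 8192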
00:36:38.153 [2024-12-09 05:31:32.267125] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.153 [2024-12-09 05:31:32.267155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.153 [2024-12-09 05:31:32.278133] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.153 [2024-12-09 05:31:32.278161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.153 [2024-12-09 05:31:32.288493] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.153 [2024-12-09 05:31:32.288522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.153 [2024-12-09 05:31:32.299423] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.153 [2024-12-09 05:31:32.299451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.153 [2024-12-09 05:31:32.312250] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.153 [2024-12-09 05:31:32.312301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.153 [2024-12-09 05:31:32.322500] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.153 [2024-12-09 05:31:32.322529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.153 [2024-12-09 05:31:32.333057] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.153 [2024-12-09 05:31:32.333085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.153 [2024-12-09 05:31:32.344006] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.153 [2024-12-09 05:31:32.344036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.153 [2024-12-09 05:31:32.354854] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.153 [2024-12-09 05:31:32.354889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.153 [2024-12-09 05:31:32.365780] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.153 [2024-12-09 05:31:32.365807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.153 [2024-12-09 05:31:32.376759] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.153 [2024-12-09 05:31:32.376787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.411 [2024-12-09 05:31:32.389225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.411 [2024-12-09 05:31:32.389265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.411 [2024-12-09 05:31:32.399542] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.411 [2024-12-09 05:31:32.399569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.411 [2024-12-09 05:31:32.410342] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.411 [2024-12-09 05:31:32.410370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.411 [2024-12-09 05:31:32.422781] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.411 
[2024-12-09 05:31:32.422807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.411 [2024-12-09 05:31:32.432363] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.411 [2024-12-09 05:31:32.432391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.411 [2024-12-09 05:31:32.443238] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.411 [2024-12-09 05:31:32.443267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.411 [2024-12-09 05:31:32.455589] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.411 [2024-12-09 05:31:32.455617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.411 [2024-12-09 05:31:32.464866] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.411 [2024-12-09 05:31:32.464894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.411 [2024-12-09 05:31:32.476090] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.411 [2024-12-09 05:31:32.476117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.411 [2024-12-09 05:31:32.486628] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.411 [2024-12-09 05:31:32.486656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.411 [2024-12-09 05:31:32.497627] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.411 [2024-12-09 05:31:32.497654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.411 [2024-12-09 05:31:32.510512] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.411 [2024-12-09 05:31:32.510541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.411 [2024-12-09 05:31:32.520194] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.411 [2024-12-09 05:31:32.520220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.411 [2024-12-09 05:31:32.531556] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.411 [2024-12-09 05:31:32.531599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.411 [2024-12-09 05:31:32.542514] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.411 [2024-12-09 05:31:32.542542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.411 [2024-12-09 05:31:32.553702] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.411 [2024-12-09 05:31:32.553729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.411 [2024-12-09 05:31:32.564381] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.411 [2024-12-09 05:31:32.564423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.411 [2024-12-09 05:31:32.575298] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.411 [2024-12-09 05:31:32.575327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.411 [2024-12-09 05:31:32.587872] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.411 [2024-12-09 05:31:32.587901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.411 [2024-12-09 05:31:32.598030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.411 [2024-12-09 05:31:32.598058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.411 [2024-12-09 05:31:32.608867] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.411 [2024-12-09 05:31:32.608895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.411 [2024-12-09 05:31:32.619963] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.411 [2024-12-09 05:31:32.619991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.411 [2024-12-09 05:31:32.631103] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.411 [2024-12-09 05:31:32.631131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.669 [2024-12-09 05:31:32.644247] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.669 [2024-12-09 05:31:32.644300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.669 [2024-12-09 05:31:32.654287] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.669 [2024-12-09 05:31:32.654315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.669 [2024-12-09 05:31:32.665039] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.669 [2024-12-09 05:31:32.665067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.669 [2024-12-09 05:31:32.677876] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.669 [2024-12-09 05:31:32.677903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.669 [2024-12-09 05:31:32.688225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.669 [2024-12-09 05:31:32.688278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.669 [2024-12-09 05:31:32.699056] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.669 [2024-12-09 05:31:32.699084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.669 [2024-12-09 05:31:32.711457] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.669 [2024-12-09 05:31:32.711486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.670 [2024-12-09 05:31:32.720886] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.670 [2024-12-09 05:31:32.720914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.670 [2024-12-09 05:31:32.732200] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.670 [2024-12-09 05:31:32.732228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.670 [2024-12-09 05:31:32.743215] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.670 [2024-12-09 05:31:32.743245] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.670 [2024-12-09 05:31:32.754217] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.670 [2024-12-09 05:31:32.754244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.670 [2024-12-09 05:31:32.767352] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.670 [2024-12-09 05:31:32.767381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.670 [2024-12-09 05:31:32.777457] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.670 [2024-12-09 05:31:32.777485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.670 [2024-12-09 05:31:32.787639] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.670 [2024-12-09 05:31:32.787667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.670 [2024-12-09 05:31:32.798410] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.670 [2024-12-09 05:31:32.798439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.670 [2024-12-09 05:31:32.808902] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.670 [2024-12-09 05:31:32.808929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.670 [2024-12-09 05:31:32.819666] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.670 [2024-12-09 05:31:32.819694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.670 [2024-12-09 05:31:32.830644] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.670 [2024-12-09 05:31:32.830672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.670 [2024-12-09 05:31:32.841520] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.670 [2024-12-09 05:31:32.841549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.670 [2024-12-09 05:31:32.852325] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.670 [2024-12-09 05:31:32.852354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.670 [2024-12-09 05:31:32.863610] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.670 [2024-12-09 05:31:32.863653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.670 [2024-12-09 05:31:32.874531] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.670 [2024-12-09 05:31:32.874559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.670 [2024-12-09 05:31:32.885734] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.670 [2024-12-09 05:31:32.885762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.927 [2024-12-09 05:31:32.896600] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.927 [2024-12-09 05:31:32.896628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.927 [2024-12-09 05:31:32.907554] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.927 [2024-12-09 05:31:32.907583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.927 [2024-12-09 05:31:32.918838] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.927 [2024-12-09 05:31:32.918866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.927 [2024-12-09 05:31:32.930038] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.927 [2024-12-09 05:31:32.930067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.927 [2024-12-09 05:31:32.942913] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.927 [2024-12-09 05:31:32.942941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.927 [2024-12-09 05:31:32.953421] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.927 [2024-12-09 05:31:32.953450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.927 [2024-12-09 05:31:32.963837] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.927 [2024-12-09 05:31:32.963865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.927 [2024-12-09 05:31:32.974331] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.927 [2024-12-09 05:31:32.974360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.927 [2024-12-09 05:31:32.985081] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.927 [2024-12-09 05:31:32.985108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.927 [2024-12-09 05:31:32.995883] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.927 [2024-12-09 05:31:32.995910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.927 [2024-12-09 05:31:33.007841] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.927 [2024-12-09 05:31:33.007870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.927 [2024-12-09 05:31:33.017522] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.927 [2024-12-09 05:31:33.017551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.927 [2024-12-09 05:31:33.029022] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.927 [2024-12-09 05:31:33.029051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.927 [2024-12-09 05:31:33.040030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.927 [2024-12-09 05:31:33.040073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.927 [2024-12-09 05:31:33.050610] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.927 [2024-12-09 05:31:33.050638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.927 [2024-12-09 05:31:33.061506] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.927 [2024-12-09 05:31:33.061535] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.927 [2024-12-09 05:31:33.074456] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.927 [2024-12-09 05:31:33.074486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.927 [2024-12-09 05:31:33.085331] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.927 [2024-12-09 05:31:33.085359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.927 [2024-12-09 05:31:33.096310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.927 [2024-12-09 05:31:33.096347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.927 [2024-12-09 05:31:33.109315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.927 [2024-12-09 05:31:33.109345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.927 [2024-12-09 05:31:33.119481] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.927 [2024-12-09 05:31:33.119509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.927 [2024-12-09 05:31:33.130443] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.927 [2024-12-09 05:31:33.130471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:38.927 [2024-12-09 05:31:33.143726] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:38.927 [2024-12-09 05:31:33.143754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.185 [2024-12-09 05:31:33.153993] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.185 [2024-12-09 05:31:33.154021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.185 [2024-12-09 05:31:33.164718] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.185 [2024-12-09 05:31:33.164747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.185 [2024-12-09 05:31:33.175754] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.185 [2024-12-09 05:31:33.175782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.185 [2024-12-09 05:31:33.186919] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.185 [2024-12-09 05:31:33.186947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.185 [2024-12-09 05:31:33.199646] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.185 [2024-12-09 05:31:33.199675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.185 [2024-12-09 05:31:33.209661] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.185 [2024-12-09 05:31:33.209689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.185 [2024-12-09 05:31:33.220899] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.185 [2024-12-09 05:31:33.220928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.185 [2024-12-09 05:31:33.231718] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.185 [2024-12-09 05:31:33.231746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.185 [2024-12-09 05:31:33.242488] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.185 [2024-12-09 05:31:33.242518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.185 [2024-12-09 05:31:33.255064] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.185 [2024-12-09 05:31:33.255092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.185 11653.00 IOPS, 91.04 MiB/s [2024-12-09T04:31:33.410Z] [2024-12-09 05:31:33.264834] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.185 [2024-12-09 05:31:33.264862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.185 [2024-12-09 05:31:33.275728] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.185 [2024-12-09 05:31:33.275756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.185 [2024-12-09 05:31:33.286767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.185 [2024-12-09 05:31:33.286795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.185 [2024-12-09 05:31:33.299665] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.185 [2024-12-09 05:31:33.299693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.185 [2024-12-09 05:31:33.309842] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.185 [2024-12-09 05:31:33.309870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.185 [2024-12-09 05:31:33.320789] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.185 [2024-12-09 05:31:33.320817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.185 [2024-12-09 05:31:33.333492] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.185 [2024-12-09 05:31:33.333520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.185 [2024-12-09 05:31:33.343043] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.185 [2024-12-09 05:31:33.343071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.185 [2024-12-09 05:31:33.354295] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.185 [2024-12-09 05:31:33.354324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.185 [2024-12-09 05:31:33.365303] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.185 [2024-12-09 05:31:33.365331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.185 [2024-12-09 05:31:33.376407] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.185 [2024-12-09 05:31:33.376436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.185 [2024-12-09 05:31:33.389356] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:36:39.185 [2024-12-09 05:31:33.389386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.185 [2024-12-09 05:31:33.399592] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.185 [2024-12-09 05:31:33.399631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.443 [2024-12-09 05:31:33.410408] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.443 [2024-12-09 05:31:33.410437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.443 [2024-12-09 05:31:33.421209] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.443 [2024-12-09 05:31:33.421237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.443 [2024-12-09 05:31:33.432160] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.443 [2024-12-09 05:31:33.432188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.443 [2024-12-09 05:31:33.443265] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.443 [2024-12-09 05:31:33.443302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.443 [2024-12-09 05:31:33.454128] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.443 [2024-12-09 05:31:33.454155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.443 [2024-12-09 05:31:33.464877] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.443 [2024-12-09 05:31:33.464906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.443 [2024-12-09 05:31:33.475372] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.443 [2024-12-09 05:31:33.475401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.443 [2024-12-09 05:31:33.486396] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.443 [2024-12-09 05:31:33.486425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.443 [2024-12-09 05:31:33.497171] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.443 [2024-12-09 05:31:33.497200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.443 [2024-12-09 05:31:33.510042] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.443 [2024-12-09 05:31:33.510070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.443 [2024-12-09 05:31:33.520569] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.443 [2024-12-09 05:31:33.520612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.443 [2024-12-09 05:31:33.531141] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.443 [2024-12-09 05:31:33.531169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.443 [2024-12-09 05:31:33.541568] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.443 [2024-12-09 05:31:33.541596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.443 [2024-12-09 05:31:33.552437] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.443 [2024-12-09 05:31:33.552466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.443 [2024-12-09 05:31:33.565305] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.443 [2024-12-09 05:31:33.565335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.443 [2024-12-09 05:31:33.575134] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.443 [2024-12-09 05:31:33.575162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.443 [2024-12-09 05:31:33.585692] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.443 [2024-12-09 05:31:33.585720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.443 [2024-12-09 05:31:33.597917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.443 [2024-12-09 05:31:33.597945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.443 [2024-12-09 05:31:33.608109] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.443 [2024-12-09 05:31:33.608169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.443 [2024-12-09 05:31:33.619144] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.443 [2024-12-09 05:31:33.619181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.443 [2024-12-09 05:31:33.631484] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.443 [2024-12-09 05:31:33.631513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.443 [2024-12-09 05:31:33.641580] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.443 [2024-12-09 05:31:33.641609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.443 [2024-12-09 05:31:33.652080] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.443 [2024-12-09 05:31:33.652107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.443 [2024-12-09 05:31:33.662396] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.443 [2024-12-09 05:31:33.662426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.701 [2024-12-09 05:31:33.673252] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.701 [2024-12-09 05:31:33.673291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.701 [2024-12-09 05:31:33.684035] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.701 [2024-12-09 05:31:33.684063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.701 [2024-12-09 05:31:33.694979] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.701 [2024-12-09 05:31:33.695007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.701 [2024-12-09 05:31:33.707890] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.701 [2024-12-09 05:31:33.707917] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.701 [2024-12-09 05:31:33.718540] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.701 [2024-12-09 05:31:33.718569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.701 [2024-12-09 05:31:33.729077] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.701 [2024-12-09 05:31:33.729104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.701 [2024-12-09 05:31:33.739706] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.701 [2024-12-09 05:31:33.739733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.701 [2024-12-09 05:31:33.750675] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.701 [2024-12-09 05:31:33.750702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.701 [2024-12-09 05:31:33.763532] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.701 [2024-12-09 05:31:33.763560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.701 [2024-12-09 05:31:33.773954] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.701 [2024-12-09 05:31:33.773981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.701 [2024-12-09 05:31:33.784585] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.701 [2024-12-09 05:31:33.784613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.701 [2024-12-09 05:31:33.797244] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.701 [2024-12-09 05:31:33.797278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.701 [2024-12-09 05:31:33.810046] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.701 [2024-12-09 05:31:33.810074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.701 [2024-12-09 05:31:33.819924] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.701 [2024-12-09 05:31:33.819960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.701 [2024-12-09 05:31:33.830692] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.701 [2024-12-09 05:31:33.830721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.701 [2024-12-09 05:31:33.843143] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.701 [2024-12-09 05:31:33.843185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.701 [2024-12-09 05:31:33.853374] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.701 [2024-12-09 05:31:33.853404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.701 [2024-12-09 05:31:33.864136] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.701 [2024-12-09 05:31:33.864165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.701 [2024-12-09 05:31:33.875598] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.701 [2024-12-09 05:31:33.875626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.701 [2024-12-09 05:31:33.886808] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.701 [2024-12-09 05:31:33.886835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.701 [2024-12-09 05:31:33.897773] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.701 [2024-12-09 05:31:33.897801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.701 [2024-12-09 05:31:33.908628] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.701 [2024-12-09 05:31:33.908657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.701 [2024-12-09 05:31:33.919435] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.701 [2024-12-09 05:31:33.919464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.958 [2024-12-09 05:31:33.931952] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.958 [2024-12-09 05:31:33.931980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.958 [2024-12-09 05:31:33.941727] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.958 [2024-12-09 05:31:33.941754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.958 [2024-12-09 05:31:33.952348] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.958 [2024-12-09 05:31:33.952377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.958 [2024-12-09 05:31:33.962847] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.958 [2024-12-09 05:31:33.962876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.958 [2024-12-09 05:31:33.973596] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.958 [2024-12-09 05:31:33.973624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.958 [2024-12-09 05:31:33.984407] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.958 [2024-12-09 05:31:33.984436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.958 [2024-12-09 05:31:33.997083] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.958 [2024-12-09 05:31:33.997112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.958 [2024-12-09 05:31:34.007646] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.958 [2024-12-09 05:31:34.007674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.958 [2024-12-09 05:31:34.018129] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.958 [2024-12-09 05:31:34.018157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.958 [2024-12-09 05:31:34.028878] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.958 [2024-12-09 05:31:34.028913] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.958 [2024-12-09 05:31:34.039487] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.958 [2024-12-09 05:31:34.039514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.958 [2024-12-09 05:31:34.050105] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.958 [2024-12-09 05:31:34.050134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.958 [2024-12-09 05:31:34.060799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.958 [2024-12-09 05:31:34.060827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.958 [2024-12-09 05:31:34.071304] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.958 [2024-12-09 05:31:34.071340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.958 [2024-12-09 05:31:34.081776] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.958 [2024-12-09 05:31:34.081804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.958 [2024-12-09 05:31:34.092472] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.958 [2024-12-09 05:31:34.092500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.958 [2024-12-09 05:31:34.105391] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.958 [2024-12-09 05:31:34.105421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.958 [2024-12-09 05:31:34.115412] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.958 [2024-12-09 05:31:34.115442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.958 [2024-12-09 05:31:34.126303] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.958 [2024-12-09 05:31:34.126332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.958 [2024-12-09 05:31:34.138949] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.958 [2024-12-09 05:31:34.138978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.958 [2024-12-09 05:31:34.150353] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.958 [2024-12-09 05:31:34.150382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.958 [2024-12-09 05:31:34.159391] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.958 [2024-12-09 05:31:34.159420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:39.958 [2024-12-09 05:31:34.171266] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:39.958 [2024-12-09 05:31:34.171303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.216 [2024-12-09 05:31:34.183539] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.216 [2024-12-09 05:31:34.183568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.216 [2024-12-09 05:31:34.193979] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.216 [2024-12-09 05:31:34.194007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.216 [2024-12-09 05:31:34.204707] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.216 [2024-12-09 05:31:34.204735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.216 [2024-12-09 05:31:34.217427] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.216 [2024-12-09 05:31:34.217456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.216 [2024-12-09 05:31:34.229489] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.216 [2024-12-09 05:31:34.229518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.216 [2024-12-09 05:31:34.238350] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.216 [2024-12-09 05:31:34.238379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.216 [2024-12-09 05:31:34.249917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.216 [2024-12-09 05:31:34.249944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.216 11758.50 IOPS, 91.86 MiB/s [2024-12-09T04:31:34.441Z] [2024-12-09 05:31:34.260480] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.216 [2024-12-09 05:31:34.260509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.216 [2024-12-09 05:31:34.271319] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.216 [2024-12-09 05:31:34.271349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.216 [2024-12-09 05:31:34.284474] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.216 [2024-12-09 05:31:34.284503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.217 [2024-12-09 05:31:34.296543] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.217 [2024-12-09 05:31:34.296572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.217 [2024-12-09 05:31:34.306102] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.217 [2024-12-09 05:31:34.306145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.217 [2024-12-09 05:31:34.317987] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.217 [2024-12-09 05:31:34.318015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.217 [2024-12-09 05:31:34.328352] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.217 [2024-12-09 05:31:34.328381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.217 [2024-12-09 05:31:34.339015] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.217 [2024-12-09 05:31:34.339044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.217 [2024-12-09 05:31:34.349851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:36:40.217 [2024-12-09 05:31:34.349878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.217 [2024-12-09 05:31:34.360413] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.217 [2024-12-09 05:31:34.360441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.217 [2024-12-09 05:31:34.371441] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.217 [2024-12-09 05:31:34.371470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.217 [2024-12-09 05:31:34.384048] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.217 [2024-12-09 05:31:34.384076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.217 [2024-12-09 05:31:34.395967] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.217 [2024-12-09 05:31:34.395995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.217 [2024-12-09 05:31:34.405981] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.217 [2024-12-09 05:31:34.406009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.217 [2024-12-09 05:31:34.416946] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.217 [2024-12-09 05:31:34.416974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.217 [2024-12-09 05:31:34.427669] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.217 [2024-12-09 05:31:34.427697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.217 [2024-12-09 05:31:34.438164] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.217 [2024-12-09 05:31:34.438192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.474 [2024-12-09 05:31:34.450917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.474 [2024-12-09 05:31:34.450945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.474 [2024-12-09 05:31:34.460968] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.474 [2024-12-09 05:31:34.461010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.474 [2024-12-09 05:31:34.471475] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.474 [2024-12-09 05:31:34.471504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.474 [2024-12-09 05:31:34.482377] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.474 [2024-12-09 05:31:34.482406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.474 [2024-12-09 05:31:34.493189] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.474 [2024-12-09 05:31:34.493217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.474 [2024-12-09 05:31:34.504343] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.474 [2024-12-09 05:31:34.504372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.474 [2024-12-09 05:31:34.516906] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.474 [2024-12-09 05:31:34.516934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.474 [2024-12-09 05:31:34.526903] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.474 [2024-12-09 05:31:34.526930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.474 [2024-12-09 05:31:34.538136] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.474 [2024-12-09 05:31:34.538164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.474 [2024-12-09 05:31:34.551559] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.474 [2024-12-09 05:31:34.551603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.474 [2024-12-09 05:31:34.561763] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.474 [2024-12-09 05:31:34.561791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.474 [2024-12-09 05:31:34.572618] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.474 [2024-12-09 05:31:34.572647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.474 [2024-12-09 05:31:34.583809] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.474 [2024-12-09 05:31:34.583837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.474 [2024-12-09 05:31:34.594805] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.474 [2024-12-09 05:31:34.594832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.474 [2024-12-09 05:31:34.607302] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.474 [2024-12-09 05:31:34.607331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.475 [2024-12-09 05:31:34.616862] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.475 [2024-12-09 05:31:34.616890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.475 [2024-12-09 05:31:34.627499] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.475 [2024-12-09 05:31:34.627528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.475 [2024-12-09 05:31:34.638146] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.475 [2024-12-09 05:31:34.638174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.475 [2024-12-09 05:31:34.648704] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.475 [2024-12-09 05:31:34.648742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.475 [2024-12-09 05:31:34.659487] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.475 [2024-12-09 05:31:34.659517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.475 [2024-12-09 05:31:34.671944] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.475 [2024-12-09 05:31:34.671973] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.475 [2024-12-09 05:31:34.681770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.475 [2024-12-09 05:31:34.681798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.475 [2024-12-09 05:31:34.693334] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.475 [2024-12-09 05:31:34.693363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.731 [2024-12-09 05:31:34.705881] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.731 [2024-12-09 05:31:34.705909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.731 [2024-12-09 05:31:34.716023] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.731 [2024-12-09 05:31:34.716052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.731 [2024-12-09 05:31:34.727450] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.731 [2024-12-09 05:31:34.727479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.731 [2024-12-09 05:31:34.738163] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.731 [2024-12-09 05:31:34.738191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.731 [2024-12-09 05:31:34.749009] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.731 [2024-12-09 05:31:34.749038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.731 [2024-12-09 05:31:34.759696] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.731 [2024-12-09 05:31:34.759740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.731 [2024-12-09 05:31:34.771031] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.732 [2024-12-09 05:31:34.771059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.732 [2024-12-09 05:31:34.782115] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.732 [2024-12-09 05:31:34.782143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.732 [2024-12-09 05:31:34.792612] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.732 [2024-12-09 05:31:34.792654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.732 [2024-12-09 05:31:34.803656] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.732 [2024-12-09 05:31:34.803684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.732 [2024-12-09 05:31:34.816030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.732 [2024-12-09 05:31:34.816059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.732 [2024-12-09 05:31:34.826493] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.732 [2024-12-09 05:31:34.826523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.732 [2024-12-09 05:31:34.837141] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.732 [2024-12-09 05:31:34.837169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.732 [2024-12-09 05:31:34.847850] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.732 [2024-12-09 05:31:34.847878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.732 [2024-12-09 05:31:34.858450] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.732 [2024-12-09 05:31:34.858486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.732 [2024-12-09 05:31:34.871531] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.732 [2024-12-09 05:31:34.871560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.732 [2024-12-09 05:31:34.881575] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.732 [2024-12-09 05:31:34.881617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.732 [2024-12-09 05:31:34.892506] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.732 [2024-12-09 05:31:34.892534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.732 [2024-12-09 05:31:34.905452] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.732 [2024-12-09 05:31:34.905480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.732 [2024-12-09 05:31:34.915490] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.732 [2024-12-09 05:31:34.915518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.732 [2024-12-09 05:31:34.925970] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.732 [2024-12-09 05:31:34.925997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.732 [2024-12-09 05:31:34.936336] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.732 [2024-12-09 05:31:34.936364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.732 [2024-12-09 05:31:34.946967] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.732 [2024-12-09 05:31:34.946994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.997 [2024-12-09 05:31:34.957810] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.997 [2024-12-09 05:31:34.957837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.997 [2024-12-09 05:31:34.970359] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.997 [2024-12-09 05:31:34.970387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.997 [2024-12-09 05:31:34.980493] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.997 [2024-12-09 05:31:34.980521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.997 [2024-12-09 05:31:34.991638] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.997 [2024-12-09 05:31:34.991665] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.997 [2024-12-09 05:31:35.002362] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.997 [2024-12-09 05:31:35.002390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.997 [2024-12-09 05:31:35.012679] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.997 [2024-12-09 05:31:35.012707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.997 [2024-12-09 05:31:35.023749] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.997 [2024-12-09 05:31:35.023776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.997 [2024-12-09 05:31:35.036484] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.997 [2024-12-09 05:31:35.036512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.997 [2024-12-09 05:31:35.048058] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.997 [2024-12-09 05:31:35.048085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.997 [2024-12-09 05:31:35.057315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.997 [2024-12-09 05:31:35.057344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.997 [2024-12-09 05:31:35.069061] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.997 [2024-12-09 05:31:35.069096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.997 [2024-12-09 05:31:35.079891] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.997 [2024-12-09 05:31:35.079920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.997 [2024-12-09 05:31:35.091121] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.997 [2024-12-09 05:31:35.091149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.997 [2024-12-09 05:31:35.102030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.997 [2024-12-09 05:31:35.102058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.997 [2024-12-09 05:31:35.112911] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.997 [2024-12-09 05:31:35.112940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.997 [2024-12-09 05:31:35.125750] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.997 [2024-12-09 05:31:35.125777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.997 [2024-12-09 05:31:35.136384] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.997 [2024-12-09 05:31:35.136414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.997 [2024-12-09 05:31:35.147245] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.997 [2024-12-09 05:31:35.147300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.997 [2024-12-09 05:31:35.160213] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.997 [2024-12-09 05:31:35.160241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.997 [2024-12-09 05:31:35.170526] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.997 [2024-12-09 05:31:35.170554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.997 [2024-12-09 05:31:35.181180] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.997 [2024-12-09 05:31:35.181207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.997 [2024-12-09 05:31:35.194037] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.997 [2024-12-09 05:31:35.194065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.997 [2024-12-09 05:31:35.204626] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.997 [2024-12-09 05:31:35.204654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:40.997 [2024-12-09 05:31:35.215655] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:40.997 [2024-12-09 05:31:35.215683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.254 [2024-12-09 05:31:35.228228] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.254 [2024-12-09 05:31:35.228280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.254 [2024-12-09 05:31:35.238373] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.254 [2024-12-09 05:31:35.238407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.254 [2024-12-09 05:31:35.249437] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.254 [2024-12-09 05:31:35.249466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.254 [2024-12-09 05:31:35.260186] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.254 [2024-12-09 05:31:35.260213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.254 11766.00 IOPS, 91.92 MiB/s [2024-12-09T04:31:35.479Z] [2024-12-09 05:31:35.271420] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.254 [2024-12-09 05:31:35.271449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.254 [2024-12-09 05:31:35.282005] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.254 [2024-12-09 05:31:35.282034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.254 [2024-12-09 05:31:35.292458] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.254 [2024-12-09 05:31:35.292488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.254 [2024-12-09 05:31:35.303532] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.254 [2024-12-09 05:31:35.303560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.255 [2024-12-09 05:31:35.316240] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
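The repeated pair above is a negative-path loop: each iteration asks the target to attach a namespace with NSID 1 to a subsystem that already holds that NSID, so spdk_nvmf_subsystem_add_ns_ext rejects the request and the RPC layer logs "Unable to add namespace". A minimal sketch of how such a collision can be provoked by hand with SPDK's scripts/rpc.py is shown below; the NQN, bdev names, sizes, and flags are illustrative assumptions recalled from the rpc.py interface, not values taken from this run, so exact options may differ.

    # create a subsystem and attach a first namespace as NSID 1 (names are assumptions)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
    # a second add that requests the same NSID is expected to fail, producing
    # "Requested NSID 1 already in use" / "Unable to add namespace" as in the log
    scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1

The interleaved "IOPS, MiB/s" lines are progress output from the I/O workload running concurrently with these RPC attempts; the errors do not interrupt it.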
00:36:41.255 [2024-12-09 05:31:35.316294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.255 [2024-12-09 05:31:35.326540] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.255 [2024-12-09 05:31:35.326569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.255 [2024-12-09 05:31:35.337717] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.255 [2024-12-09 05:31:35.337746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.255 [2024-12-09 05:31:35.350674] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.255 [2024-12-09 05:31:35.350703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.255 [2024-12-09 05:31:35.360683] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.255 [2024-12-09 05:31:35.360711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.255 [2024-12-09 05:31:35.371656] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.255 [2024-12-09 05:31:35.371686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.255 [2024-12-09 05:31:35.382686] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.255 [2024-12-09 05:31:35.382714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.255 [2024-12-09 05:31:35.393665] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.255 [2024-12-09 05:31:35.393694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.255 [2024-12-09 05:31:35.406199] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.255 [2024-12-09 05:31:35.406226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.255 [2024-12-09 05:31:35.416117] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.255 [2024-12-09 05:31:35.416145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.255 [2024-12-09 05:31:35.426961] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.255 [2024-12-09 05:31:35.426989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.255 [2024-12-09 05:31:35.437745] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.255 [2024-12-09 05:31:35.437772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.255 [2024-12-09 05:31:35.448411] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.255 [2024-12-09 05:31:35.448440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.255 [2024-12-09 05:31:35.461215] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.255 [2024-12-09 05:31:35.461243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.255 [2024-12-09 05:31:35.471364] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.255 [2024-12-09 05:31:35.471393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.512 [2024-12-09 05:31:35.481776] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.512 [2024-12-09 05:31:35.481818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.512 [2024-12-09 05:31:35.492799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.512 [2024-12-09 05:31:35.492827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.512 [2024-12-09 05:31:35.503818] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.512 [2024-12-09 05:31:35.503845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.512 [2024-12-09 05:31:35.516852] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.512 [2024-12-09 05:31:35.516879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.512 [2024-12-09 05:31:35.527409] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.512 [2024-12-09 05:31:35.527438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.512 [2024-12-09 05:31:35.537620] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.512 [2024-12-09 05:31:35.537648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.512 [2024-12-09 05:31:35.548683] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.512 [2024-12-09 05:31:35.548711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.512 [2024-12-09 05:31:35.561610] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.512 [2024-12-09 05:31:35.561638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.512 [2024-12-09 05:31:35.571643] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.512 [2024-12-09 05:31:35.571671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.512 [2024-12-09 05:31:35.582389] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.512 [2024-12-09 05:31:35.582417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.512 [2024-12-09 05:31:35.593144] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.512 [2024-12-09 05:31:35.593172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.512 [2024-12-09 05:31:35.604437] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.512 [2024-12-09 05:31:35.604465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.512 [2024-12-09 05:31:35.615439] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.512 [2024-12-09 05:31:35.615469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.512 [2024-12-09 05:31:35.628347] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.512 [2024-12-09 05:31:35.628377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.512 [2024-12-09 05:31:35.638594] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.512 [2024-12-09 05:31:35.638621] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.512 [2024-12-09 05:31:35.648869] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.512 [2024-12-09 05:31:35.648898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.512 [2024-12-09 05:31:35.660139] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.512 [2024-12-09 05:31:35.660168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.512 [2024-12-09 05:31:35.670738] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.512 [2024-12-09 05:31:35.670781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.512 [2024-12-09 05:31:35.681849] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.512 [2024-12-09 05:31:35.681877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.512 [2024-12-09 05:31:35.694840] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.512 [2024-12-09 05:31:35.694867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.512 [2024-12-09 05:31:35.705289] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.512 [2024-12-09 05:31:35.705318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.512 [2024-12-09 05:31:35.715690] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.512 [2024-12-09 05:31:35.715733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.512 [2024-12-09 05:31:35.726687] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.512 [2024-12-09 05:31:35.726716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.770 [2024-12-09 05:31:35.738933] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.770 [2024-12-09 05:31:35.738961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.770 [2024-12-09 05:31:35.748885] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.770 [2024-12-09 05:31:35.748913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.770 [2024-12-09 05:31:35.760381] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.770 [2024-12-09 05:31:35.760410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.770 [2024-12-09 05:31:35.773884] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.770 [2024-12-09 05:31:35.773913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.770 [2024-12-09 05:31:35.784085] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.770 [2024-12-09 05:31:35.784113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.770 [2024-12-09 05:31:35.794907] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.770 [2024-12-09 05:31:35.794935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.770 [2024-12-09 05:31:35.807471] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.770 [2024-12-09 05:31:35.807500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.770 [2024-12-09 05:31:35.816956] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.770 [2024-12-09 05:31:35.816983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.770 [2024-12-09 05:31:35.828289] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.770 [2024-12-09 05:31:35.828328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.770 [2024-12-09 05:31:35.838772] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.770 [2024-12-09 05:31:35.838801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.770 [2024-12-09 05:31:35.849631] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.770 [2024-12-09 05:31:35.849659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.770 [2024-12-09 05:31:35.862541] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.770 [2024-12-09 05:31:35.862584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.770 [2024-12-09 05:31:35.874334] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.770 [2024-12-09 05:31:35.874363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.770 [2024-12-09 05:31:35.883085] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.770 [2024-12-09 05:31:35.883113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.770 [2024-12-09 05:31:35.894580] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.770 [2024-12-09 05:31:35.894623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.770 [2024-12-09 05:31:35.907114] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.770 [2024-12-09 05:31:35.907144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.770 [2024-12-09 05:31:35.916324] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.770 [2024-12-09 05:31:35.916353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.770 [2024-12-09 05:31:35.927108] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.770 [2024-12-09 05:31:35.927137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.770 [2024-12-09 05:31:35.937790] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.770 [2024-12-09 05:31:35.937818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.770 [2024-12-09 05:31:35.951080] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.770 [2024-12-09 05:31:35.951108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.770 [2024-12-09 05:31:35.961183] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.770 [2024-12-09 05:31:35.961211] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.770 [2024-12-09 05:31:35.972060] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.770 [2024-12-09 05:31:35.972087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:41.770 [2024-12-09 05:31:35.984721] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:41.770 [2024-12-09 05:31:35.984749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.028 [2024-12-09 05:31:35.996680] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.028 [2024-12-09 05:31:35.996709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.028 [2024-12-09 05:31:36.005926] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.028 [2024-12-09 05:31:36.005954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.028 [2024-12-09 05:31:36.017710] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.028 [2024-12-09 05:31:36.017738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.028 [2024-12-09 05:31:36.031137] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.028 [2024-12-09 05:31:36.031165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.028 [2024-12-09 05:31:36.041353] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.028 [2024-12-09 05:31:36.041382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.028 [2024-12-09 05:31:36.051983] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.028 [2024-12-09 05:31:36.052026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.028 [2024-12-09 05:31:36.062626] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.028 [2024-12-09 05:31:36.062653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.028 [2024-12-09 05:31:36.073262] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.028 [2024-12-09 05:31:36.073310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.028 [2024-12-09 05:31:36.083645] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.028 [2024-12-09 05:31:36.083672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.028 [2024-12-09 05:31:36.094409] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.028 [2024-12-09 05:31:36.094436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.028 [2024-12-09 05:31:36.104994] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.028 [2024-12-09 05:31:36.105033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.028 [2024-12-09 05:31:36.116170] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.028 [2024-12-09 05:31:36.116207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.028 [2024-12-09 05:31:36.126918] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.028 [2024-12-09 05:31:36.126945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.028 [2024-12-09 05:31:36.137932] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.028 [2024-12-09 05:31:36.137959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.028 [2024-12-09 05:31:36.150537] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.028 [2024-12-09 05:31:36.150580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.028 [2024-12-09 05:31:36.162094] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.028 [2024-12-09 05:31:36.162121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.028 [2024-12-09 05:31:36.171690] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.028 [2024-12-09 05:31:36.171717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.028 [2024-12-09 05:31:36.182900] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.028 [2024-12-09 05:31:36.182942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.028 [2024-12-09 05:31:36.195963] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.028 [2024-12-09 05:31:36.195990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.028 [2024-12-09 05:31:36.206360] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.028 [2024-12-09 05:31:36.206388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.028 [2024-12-09 05:31:36.217122] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.028 [2024-12-09 05:31:36.217149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.028 [2024-12-09 05:31:36.228237] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.028 [2024-12-09 05:31:36.228290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.028 [2024-12-09 05:31:36.238955] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.028 [2024-12-09 05:31:36.238983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.028 [2024-12-09 05:31:36.251519] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.028 [2024-12-09 05:31:36.251548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.285 [2024-12-09 05:31:36.262011] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.285 [2024-12-09 05:31:36.262038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.285 11776.25 IOPS, 92.00 MiB/s [2024-12-09T04:31:36.510Z] [2024-12-09 05:31:36.272427] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.285 [2024-12-09 05:31:36.272455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.285 [2024-12-09 05:31:36.283219] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:36:42.285 [2024-12-09 05:31:36.283246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.285 [2024-12-09 05:31:36.293969] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.285 [2024-12-09 05:31:36.293997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.285 [2024-12-09 05:31:36.304866] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.285 [2024-12-09 05:31:36.304894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.285 [2024-12-09 05:31:36.315628] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.285 [2024-12-09 05:31:36.315657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.285 [2024-12-09 05:31:36.328645] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.285 [2024-12-09 05:31:36.328683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.285 [2024-12-09 05:31:36.340104] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.285 [2024-12-09 05:31:36.340146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.285 [2024-12-09 05:31:36.349209] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.285 [2024-12-09 05:31:36.349237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.285 [2024-12-09 05:31:36.360865] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.285 [2024-12-09 05:31:36.360893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.285 [2024-12-09 05:31:36.371086] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.285 [2024-12-09 05:31:36.371114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.285 [2024-12-09 05:31:36.382173] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.285 [2024-12-09 05:31:36.382201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.285 [2024-12-09 05:31:36.395008] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.285 [2024-12-09 05:31:36.395036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.285 [2024-12-09 05:31:36.405286] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.285 [2024-12-09 05:31:36.405314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.285 [2024-12-09 05:31:36.416374] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.285 [2024-12-09 05:31:36.416402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.285 [2024-12-09 05:31:36.428750] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.285 [2024-12-09 05:31:36.428777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.285 [2024-12-09 05:31:36.438925] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.285 [2024-12-09 05:31:36.438952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.285 [2024-12-09 05:31:36.449457] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.285 [2024-12-09 05:31:36.449485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.285 [2024-12-09 05:31:36.462176] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.285 [2024-12-09 05:31:36.462203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.285 [2024-12-09 05:31:36.472337] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.285 [2024-12-09 05:31:36.472366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.285 [2024-12-09 05:31:36.482656] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.285 [2024-12-09 05:31:36.482684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.285 [2024-12-09 05:31:36.493008] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.285 [2024-12-09 05:31:36.493036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.285 [2024-12-09 05:31:36.503205] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.285 [2024-12-09 05:31:36.503233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.543 [2024-12-09 05:31:36.513997] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.543 [2024-12-09 05:31:36.514040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.543 [2024-12-09 05:31:36.524716] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.543 [2024-12-09 05:31:36.524744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.543 [2024-12-09 05:31:36.535925] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.543 [2024-12-09 05:31:36.535962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.543 [2024-12-09 05:31:36.546743] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.543 [2024-12-09 05:31:36.546771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.543 [2024-12-09 05:31:36.557331] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.543 [2024-12-09 05:31:36.557360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.543 [2024-12-09 05:31:36.570331] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.543 [2024-12-09 05:31:36.570376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.543 [2024-12-09 05:31:36.580689] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.543 [2024-12-09 05:31:36.580718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.543 [2024-12-09 05:31:36.591535] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.543 [2024-12-09 05:31:36.591564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.543 [2024-12-09 05:31:36.604387] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.543 [2024-12-09 05:31:36.604416] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.543 [2024-12-09 05:31:36.614544] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.543 [2024-12-09 05:31:36.614587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.543 [2024-12-09 05:31:36.625613] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.543 [2024-12-09 05:31:36.625641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.543 [2024-12-09 05:31:36.638234] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.543 [2024-12-09 05:31:36.638284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.543 [2024-12-09 05:31:36.648535] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.543 [2024-12-09 05:31:36.648563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.543 [2024-12-09 05:31:36.659484] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.543 [2024-12-09 05:31:36.659512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.543 [2024-12-09 05:31:36.672132] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.543 [2024-12-09 05:31:36.672161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.543 [2024-12-09 05:31:36.683988] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.543 [2024-12-09 05:31:36.684017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.543 [2024-12-09 05:31:36.693006] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.543 [2024-12-09 05:31:36.693034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.543 [2024-12-09 05:31:36.705005] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.543 [2024-12-09 05:31:36.705033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.543 [2024-12-09 05:31:36.715774] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.543 [2024-12-09 05:31:36.715802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.543 [2024-12-09 05:31:36.726661] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.543 [2024-12-09 05:31:36.726689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.543 [2024-12-09 05:31:36.737659] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.543 [2024-12-09 05:31:36.737687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.543 [2024-12-09 05:31:36.748388] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.543 [2024-12-09 05:31:36.748418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.544 [2024-12-09 05:31:36.761035] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.544 [2024-12-09 05:31:36.761078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.800 [2024-12-09 05:31:36.770993] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.800 [2024-12-09 05:31:36.771022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.800 [2024-12-09 05:31:36.781905] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.800 [2024-12-09 05:31:36.781934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.800 [2024-12-09 05:31:36.794552] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.800 [2024-12-09 05:31:36.794597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.800 [2024-12-09 05:31:36.804895] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.800 [2024-12-09 05:31:36.804937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.800 [2024-12-09 05:31:36.815842] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.800 [2024-12-09 05:31:36.815871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.800 [2024-12-09 05:31:36.826512] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.800 [2024-12-09 05:31:36.826541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.800 [2024-12-09 05:31:36.837385] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.800 [2024-12-09 05:31:36.837414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.800 [2024-12-09 05:31:36.849749] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.800 [2024-12-09 05:31:36.849777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.800 [2024-12-09 05:31:36.860049] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.800 [2024-12-09 05:31:36.860077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.800 [2024-12-09 05:31:36.870666] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.800 [2024-12-09 05:31:36.870694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.800 [2024-12-09 05:31:36.881512] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.800 [2024-12-09 05:31:36.881541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.800 [2024-12-09 05:31:36.894009] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.800 [2024-12-09 05:31:36.894037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.800 [2024-12-09 05:31:36.903794] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.800 [2024-12-09 05:31:36.903823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.800 [2024-12-09 05:31:36.915373] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.800 [2024-12-09 05:31:36.915403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.800 [2024-12-09 05:31:36.928973] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.800 [2024-12-09 05:31:36.929002] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.800 [2024-12-09 05:31:36.939589] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.800 [2024-12-09 05:31:36.939618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.800 [2024-12-09 05:31:36.950604] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.800 [2024-12-09 05:31:36.950633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.800 [2024-12-09 05:31:36.961555] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.800 [2024-12-09 05:31:36.961599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.800 [2024-12-09 05:31:36.972679] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.800 [2024-12-09 05:31:36.972706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.800 [2024-12-09 05:31:36.983900] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.800 [2024-12-09 05:31:36.983939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.800 [2024-12-09 05:31:36.994898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.800 [2024-12-09 05:31:36.994942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.800 [2024-12-09 05:31:37.006196] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.800 [2024-12-09 05:31:37.006224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:42.800 [2024-12-09 05:31:37.017074] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:42.800 [2024-12-09 05:31:37.017102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.058 [2024-12-09 05:31:37.029554] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.058 [2024-12-09 05:31:37.029582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.058 [2024-12-09 05:31:37.039683] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.058 [2024-12-09 05:31:37.039711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.058 [2024-12-09 05:31:37.050381] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.058 [2024-12-09 05:31:37.050410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.058 [2024-12-09 05:31:37.063203] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.058 [2024-12-09 05:31:37.063230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.058 [2024-12-09 05:31:37.073161] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.058 [2024-12-09 05:31:37.073189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.058 [2024-12-09 05:31:37.083765] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.058 [2024-12-09 05:31:37.083792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.058 [2024-12-09 05:31:37.096518] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.058 [2024-12-09 05:31:37.096546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.058 [2024-12-09 05:31:37.106937] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.058 [2024-12-09 05:31:37.106966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.058 [2024-12-09 05:31:37.117615] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.058 [2024-12-09 05:31:37.117643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.058 [2024-12-09 05:31:37.128964] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.058 [2024-12-09 05:31:37.128992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.058 [2024-12-09 05:31:37.139598] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.058 [2024-12-09 05:31:37.139627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.058 [2024-12-09 05:31:37.150089] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.058 [2024-12-09 05:31:37.150118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.058 [2024-12-09 05:31:37.160975] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.058 [2024-12-09 05:31:37.161020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.058 [2024-12-09 05:31:37.171848] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.058 [2024-12-09 05:31:37.171876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.058 [2024-12-09 05:31:37.191758] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.058 [2024-12-09 05:31:37.191788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.058 [2024-12-09 05:31:37.202193] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.058 [2024-12-09 05:31:37.202220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.058 [2024-12-09 05:31:37.213084] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.058 [2024-12-09 05:31:37.213112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.058 [2024-12-09 05:31:37.224117] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.058 [2024-12-09 05:31:37.224145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.058 [2024-12-09 05:31:37.234779] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.058 [2024-12-09 05:31:37.234807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.058 [2024-12-09 05:31:37.245406] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.058 [2024-12-09 05:31:37.245434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.058 [2024-12-09 05:31:37.256424] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.058 [2024-12-09 05:31:37.256452] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.058 11783.00 IOPS, 92.05 MiB/s [2024-12-09T04:31:37.283Z] [2024-12-09 05:31:37.268883] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.058 [2024-12-09 05:31:37.268909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.058 [2024-12-09 05:31:37.276965] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.058 [2024-12-09 05:31:37.276990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.058 00:36:43.058 Latency(us) 00:36:43.058 [2024-12-09T04:31:37.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:43.058 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:36:43.058 Nvme1n1 : 5.01 11784.79 92.07 0.00 0.00 10847.95 4538.97 19709.35 00:36:43.058 [2024-12-09T04:31:37.283Z] =================================================================================================================== 00:36:43.058 [2024-12-09T04:31:37.283Z] Total : 11784.79 92.07 0.00 0.00 10847.95 4538.97 19709.35 00:36:43.316 [2024-12-09 05:31:37.285005] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.316 [2024-12-09 05:31:37.285030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.316 [2024-12-09 05:31:37.293015] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.316 [2024-12-09 05:31:37.293037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.316 [2024-12-09 05:31:37.301055] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.316 [2024-12-09 05:31:37.301085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.316 [2024-12-09 05:31:37.309110] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.316 [2024-12-09 05:31:37.309161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.316 [2024-12-09 05:31:37.317131] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.316 [2024-12-09 05:31:37.317182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.316 [2024-12-09 05:31:37.325153] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.316 [2024-12-09 05:31:37.325211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.316 [2024-12-09 05:31:37.333172] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.316 [2024-12-09 05:31:37.333217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.316 [2024-12-09 05:31:37.341186] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.316 [2024-12-09 05:31:37.341230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.316 [2024-12-09 05:31:37.349215] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.316 [2024-12-09 05:31:37.349261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.316 [2024-12-09 05:31:37.357235] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.316 [2024-12-09 
05:31:37.357289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.316 [2024-12-09 05:31:37.365263] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.316 [2024-12-09 05:31:37.365314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.316 [2024-12-09 05:31:37.373288] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.316 [2024-12-09 05:31:37.373331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.316 [2024-12-09 05:31:37.381320] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.316 [2024-12-09 05:31:37.381364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.316 [2024-12-09 05:31:37.389337] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.316 [2024-12-09 05:31:37.389379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.316 [2024-12-09 05:31:37.397363] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.316 [2024-12-09 05:31:37.397405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.316 [2024-12-09 05:31:37.405375] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.316 [2024-12-09 05:31:37.405418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.316 [2024-12-09 05:31:37.413383] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.316 [2024-12-09 05:31:37.413418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.316 [2024-12-09 05:31:37.421377] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.316 [2024-12-09 05:31:37.421399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.316 [2024-12-09 05:31:37.429400] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.316 [2024-12-09 05:31:37.429422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.317 [2024-12-09 05:31:37.437414] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.317 [2024-12-09 05:31:37.437434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.317 [2024-12-09 05:31:37.445440] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.317 [2024-12-09 05:31:37.445463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.317 [2024-12-09 05:31:37.453509] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.317 [2024-12-09 05:31:37.453552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.317 [2024-12-09 05:31:37.461534] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.317 [2024-12-09 05:31:37.461591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.317 [2024-12-09 05:31:37.469502] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.317 [2024-12-09 05:31:37.469524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.317 [2024-12-09 05:31:37.477538] 
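The repeated errors above are the point of this phase of zcopy.sh: while the I/O job keeps NSID 1 of nqn.2016-06.io.spdk:cnode1 busy, something keeps re-issuing nvmf_subsystem_add_ns for a namespace ID the subsystem already owns (apparently a background loop that the script later tries to kill, per the "zcopy.sh: line 42: kill: (571014) - No such process" message further down), so every attempt is rejected while the data path continues at roughly 11.8k IOPS / 92 MiB/s. A minimal sketch of that kind of add-namespace hammer, written directly against SPDK's scripts/rpc.py rather than the rpc_cmd shell wrapper used in this trace, could look as follows; nvmf_subsystem_add_ns and the "<nqn> <bdev> -n <nsid>" argument form are the same RPC and ordering that appear later in this trace, while the malloc0 bdev name is an assumption borrowed from the bdev_delay_create call below.

    # Hedged sketch, not the zcopy.sh source: keep re-adding a namespace ID that is
    # already in use while I/O runs elsewhere. Each call is expected to fail with
    # "Requested NSID 1 already in use", which is exactly what the trace above shows.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    nqn=nqn.2016-06.io.spdk:cnode1
    while true; do
        # NSID 1 is still attached (malloc0 is an assumed bdev name), so this is rejected
        "$rootdir/scripts/rpc.py" nvmf_subsystem_add_ns "$nqn" malloc0 -n 1 || true
        sleep 0.01
    done &
    add_ns_pid=$!
    # ... run the ~5 s I/O workload here, then stop the loop ...
    kill "$add_ns_pid" 2>/dev/null || true

The per-second progress lines and the Latency(us) summary above show the I/O job finishing at 11784.79 IOPS with no failures despite the constant RPC rejections.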
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.317 [2024-12-09 05:31:37.477593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.317 [2024-12-09 05:31:37.485544] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.317 [2024-12-09 05:31:37.485579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.317 [2024-12-09 05:31:37.493587] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.317 [2024-12-09 05:31:37.493608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.317 [2024-12-09 05:31:37.501598] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.317 [2024-12-09 05:31:37.501632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.317 [2024-12-09 05:31:37.509634] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.317 [2024-12-09 05:31:37.509654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.317 [2024-12-09 05:31:37.517640] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.317 [2024-12-09 05:31:37.517660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.317 [2024-12-09 05:31:37.525647] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:43.317 [2024-12-09 05:31:37.525682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:43.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (571014) - No such process 00:36:43.317 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 571014 00:36:43.317 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:43.317 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.317 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:43.317 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.317 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:43.317 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.317 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:43.575 delay0 00:36:43.575 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.575 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:36:43.575 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.575 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:43.575 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.575 05:31:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 
-l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:36:43.575 [2024-12-09 05:31:37.694411] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:51.673 [2024-12-09 05:31:44.771776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe38380 is same with the state(6) to be set 00:36:51.673 Initializing NVMe Controllers 00:36:51.673 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:51.673 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:51.673 Initialization complete. Launching workers. 00:36:51.673 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 263, failed: 17008 00:36:51.673 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 17179, failed to submit 92 00:36:51.673 success 17074, unsuccessful 105, failed 0 00:36:51.673 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:36:51.673 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:36:51.673 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:51.673 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:36:51.673 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:51.673 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:36:51.673 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:51.673 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:51.673 rmmod nvme_tcp 00:36:51.673 rmmod nvme_fabrics 00:36:51.673 rmmod nvme_keyring 00:36:51.673 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:51.673 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:36:51.673 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:36:51.673 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 569664 ']' 00:36:51.673 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 569664 00:36:51.673 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 569664 ']' 00:36:51.673 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 569664 00:36:51.673 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:36:51.673 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:51.673 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 569664 00:36:51.673 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:51.673 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:51.673 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 569664' 00:36:51.673 killing process with pid 569664 00:36:51.673 05:31:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 569664 00:36:51.673 05:31:44 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 569664 00:36:51.673 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:51.673 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:51.673 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:51.673 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:36:51.673 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:36:51.673 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:51.673 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:36:51.673 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:51.673 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:51.673 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:51.673 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:51.673 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:53.053 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:53.053 00:36:53.053 real 0m28.949s 00:36:53.053 user 0m42.449s 00:36:53.053 sys 0m9.001s 00:36:53.053 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:53.053 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:53.053 ************************************ 00:36:53.053 END TEST nvmf_zcopy 00:36:53.053 ************************************ 00:36:53.053 05:31:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:36:53.053 05:31:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:53.053 05:31:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:53.053 05:31:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:36:53.312 ************************************ 00:36:53.312 START TEST nvmf_nmic 00:36:53.312 ************************************ 00:36:53.312 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:36:53.312 * Looking for test storage... 
00:36:53.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:53.312 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:53.312 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:36:53.312 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:53.312 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:53.312 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:53.312 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:53.312 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:53.312 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:36:53.312 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:36:53.312 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:36:53.312 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:36:53.312 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:36:53.312 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:36:53.312 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:53.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.313 --rc genhtml_branch_coverage=1 00:36:53.313 --rc genhtml_function_coverage=1 00:36:53.313 --rc genhtml_legend=1 00:36:53.313 --rc geninfo_all_blocks=1 00:36:53.313 --rc geninfo_unexecuted_blocks=1 00:36:53.313 00:36:53.313 ' 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:53.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.313 --rc genhtml_branch_coverage=1 00:36:53.313 --rc genhtml_function_coverage=1 00:36:53.313 --rc genhtml_legend=1 00:36:53.313 --rc geninfo_all_blocks=1 00:36:53.313 --rc geninfo_unexecuted_blocks=1 00:36:53.313 00:36:53.313 ' 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:53.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.313 --rc genhtml_branch_coverage=1 00:36:53.313 --rc genhtml_function_coverage=1 00:36:53.313 --rc genhtml_legend=1 00:36:53.313 --rc geninfo_all_blocks=1 00:36:53.313 --rc geninfo_unexecuted_blocks=1 00:36:53.313 00:36:53.313 ' 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:53.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.313 --rc genhtml_branch_coverage=1 00:36:53.313 --rc genhtml_function_coverage=1 00:36:53.313 --rc genhtml_legend=1 00:36:53.313 --rc geninfo_all_blocks=1 00:36:53.313 --rc geninfo_unexecuted_blocks=1 00:36:53.313 00:36:53.313 ' 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
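The scripts/common.sh trace a few entries above is the lcov version gate: 'lt 1.15 2' splits both version strings on '.', '-' and ':' and compares them field by field, returning success only when the first is strictly older, which selects the legacy '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' options. A condensed, hedged sketch of that comparison (numeric fields only, without the real helper's extra cases) looks roughly like this:

  # Simplified stand-in for the lt/cmp_versions helpers traced above.
  version_lt() {                          # version_lt 1.15 2  -> true (1.15 < 2)
      local IFS='.-:' i x y
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( i = 0; i < max; i++ )); do
          x=${ver1[i]:-0}; y=${ver2[i]:-0}
          (( x > y )) && return 1         # first version is newer -> not less-than
          (( x < y )) && return 0         # first version is older -> less-than
      done
      return 1                            # equal -> not strictly less-than
  }
  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov: keep the --rc options"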
00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:53.313 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:53.314 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:53.314 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:53.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:53.314 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:53.314 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:53.314 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:53.314 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:53.314 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:53.314 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:36:53.314 
05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:53.314 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:53.314 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:53.314 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:53.314 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:53.314 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:53.314 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:53.314 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:53.314 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:53.314 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:53.314 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:36:53.314 05:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:55.847 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:55.847 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:55.847 05:31:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:55.847 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:55.847 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:55.848 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:55.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:55.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:36:55.848 00:36:55.848 --- 10.0.0.2 ping statistics --- 00:36:55.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:55.848 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:55.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:55.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:36:55.848 00:36:55.848 --- 10.0.0.1 ping statistics --- 00:36:55.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:55.848 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=574544 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 574544 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 574544 ']' 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:55.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:55.848 05:31:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:55.848 [2024-12-09 05:31:49.922438] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:36:55.848 [2024-12-09 05:31:49.922522] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:55.848 [2024-12-09 05:31:49.994511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:55.848 [2024-12-09 05:31:50.061027] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:55.848 [2024-12-09 05:31:50.061085] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:55.848 [2024-12-09 05:31:50.061109] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:55.848 [2024-12-09 05:31:50.061120] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:55.848 [2024-12-09 05:31:50.061129] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:55.848 [2024-12-09 05:31:50.062587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:55.848 [2024-12-09 05:31:50.062719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:55.848 [2024-12-09 05:31:50.062774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:55.848 [2024-12-09 05:31:50.062778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:56.106 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:56.107 [2024-12-09 05:31:50.217073] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:56.107 Malloc0 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:56.107 [2024-12-09 05:31:50.277706] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:36:56.107 test case1: single bdev can't be used in multiple subsystems 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:56.107 [2024-12-09 05:31:50.301457] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:36:56.107 [2024-12-09 05:31:50.301487] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:36:56.107 [2024-12-09 05:31:50.301501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:56.107 request: 00:36:56.107 { 00:36:56.107 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:36:56.107 "namespace": { 00:36:56.107 "bdev_name": "Malloc0", 00:36:56.107 "no_auto_visible": false, 
00:36:56.107 "hide_metadata": false 00:36:56.107 }, 00:36:56.107 "method": "nvmf_subsystem_add_ns", 00:36:56.107 "req_id": 1 00:36:56.107 } 00:36:56.107 Got JSON-RPC error response 00:36:56.107 response: 00:36:56.107 { 00:36:56.107 "code": -32602, 00:36:56.107 "message": "Invalid parameters" 00:36:56.107 } 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:36:56.107 Adding namespace failed - expected result. 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:36:56.107 test case2: host connect to nvmf target in multiple paths 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:56.107 [2024-12-09 05:31:50.309594] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.107 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:57.039 05:31:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:36:57.603 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:36:57.603 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:36:57.603 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:36:57.603 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:36:57.603 05:31:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:36:59.502 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:36:59.502 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:36:59.502 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:36:59.502 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:36:59.502 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:36:59.502 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:36:59.502 05:31:53 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:59.502 [global] 00:36:59.502 thread=1 00:36:59.502 invalidate=1 00:36:59.502 rw=write 00:36:59.502 time_based=1 00:36:59.502 runtime=1 00:36:59.502 ioengine=libaio 00:36:59.502 direct=1 00:36:59.502 bs=4096 00:36:59.502 iodepth=1 00:36:59.502 norandommap=0 00:36:59.502 numjobs=1 00:36:59.502 00:36:59.502 verify_dump=1 00:36:59.502 verify_backlog=512 00:36:59.502 verify_state_save=0 00:36:59.502 do_verify=1 00:36:59.502 verify=crc32c-intel 00:36:59.502 [job0] 00:36:59.502 filename=/dev/nvme0n1 00:36:59.502 Could not set queue depth (nvme0n1) 00:36:59.760 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:59.760 fio-3.35 00:36:59.760 Starting 1 thread 00:37:01.133 00:37:01.133 job0: (groupid=0, jobs=1): err= 0: pid=575066: Mon Dec 9 05:31:54 2024 00:37:01.133 read: IOPS=2327, BW=9311KiB/s (9534kB/s)(9320KiB/1001msec) 00:37:01.133 slat (nsec): min=4489, max=54901, avg=12268.01, stdev=7076.96 00:37:01.133 clat (usec): min=176, max=447, avg=222.35, stdev=27.79 00:37:01.133 lat (usec): min=183, max=479, avg=234.62, stdev=32.35 00:37:01.133 clat percentiles (usec): 00:37:01.133 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:37:01.133 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221], 00:37:01.133 | 70.00th=[ 225], 80.00th=[ 231], 90.00th=[ 255], 95.00th=[ 285], 00:37:01.133 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 375], 99.95th=[ 408], 00:37:01.133 | 99.99th=[ 449] 00:37:01.133 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:37:01.133 slat (nsec): min=5809, max=61097, avg=13544.27, stdev=5259.44 00:37:01.133 clat (usec): min=125, max=297, avg=156.63, stdev=15.78 00:37:01.133 lat (usec): min=131, max=336, avg=170.18, stdev=18.00 00:37:01.133 clat percentiles (usec): 00:37:01.133 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:37:01.133 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 157], 00:37:01.133 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 178], 95.00th=[ 190], 00:37:01.133 | 99.00th=[ 206], 99.50th=[ 219], 99.90th=[ 262], 99.95th=[ 262], 00:37:01.133 | 99.99th=[ 297] 00:37:01.133 bw ( KiB/s): min=12168, max=12168, per=100.00%, avg=12168.00, stdev= 0.00, samples=1 00:37:01.133 iops : min= 3042, max= 3042, avg=3042.00, stdev= 0.00, samples=1 00:37:01.134 lat (usec) : 250=94.93%, 500=5.07% 00:37:01.134 cpu : usr=3.50%, sys=6.50%, ctx=4890, majf=0, minf=1 00:37:01.134 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:01.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:01.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:01.134 issued rwts: total=2330,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:01.134 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:01.134 00:37:01.134 Run status group 0 (all jobs): 00:37:01.134 READ: bw=9311KiB/s (9534kB/s), 9311KiB/s-9311KiB/s (9534kB/s-9534kB/s), io=9320KiB (9544kB), run=1001-1001msec 00:37:01.134 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:37:01.134 00:37:01.134 Disk stats (read/write): 00:37:01.134 nvme0n1: ios=2098/2380, merge=0/0, ticks=450/364, in_queue=814, util=91.58% 00:37:01.134 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 
-- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:01.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:01.134 rmmod nvme_tcp 00:37:01.134 rmmod nvme_fabrics 00:37:01.134 rmmod nvme_keyring 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 574544 ']' 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 574544 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 574544 ']' 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 574544 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 574544 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 574544' 00:37:01.134 killing process with pid 574544 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 574544 00:37:01.134 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@978 -- # wait 574544 00:37:01.393 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:01.393 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:01.393 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:01.393 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:37:01.393 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:37:01.393 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:01.393 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:37:01.393 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:01.393 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:01.393 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:01.393 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:01.393 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:03.300 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:03.300 00:37:03.300 real 0m10.171s 00:37:03.300 user 0m22.079s 00:37:03.300 sys 0m2.697s 00:37:03.300 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:03.300 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:03.300 ************************************ 00:37:03.300 END TEST nvmf_nmic 00:37:03.300 ************************************ 00:37:03.300 05:31:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:37:03.300 05:31:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:03.300 05:31:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:03.300 05:31:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:37:03.300 ************************************ 00:37:03.300 START TEST nvmf_fio_target 00:37:03.300 ************************************ 00:37:03.300 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:37:03.559 * Looking for test storage... 
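Before the nvmf_fio_target trace continues, it helps to condense what the nmic run above did on the initiator side once the duplicate-namespace attempt had failed with the expected JSON-RPC 'Invalid parameters' error: connect the same subsystem over both TCP listeners, wait for the namespace to appear, run the small fio write job, then drop both paths with a single disconnect. The sketch below is a hedged reduction of that flow; the host NQN/ID, subsystem name and 10.0.0.2 listeners are the values from this run, and the rpc.py calls that created cnode1, its Malloc0 namespace and the 4420/4421 listeners are assumed to have been issued already.

  # Hedged condensation of nmic "test case2: host connect to nvmf target in multiple paths".
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
  for port in 4420 4421; do               # same subsystem, two TCP listeners
      nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
          -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s "$port"
  done
  until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done
  # ... fio job against /dev/nvme0n1 runs here (4 KiB writes, iodepth 1, libaio, crc32c verify) ...
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # log reports "disconnected 2 controller(s)"

The fio parameters in the comment are taken from the [global]/[job0] job file dumped in the trace above.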
00:37:03.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:03.559 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:03.559 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:37:03.559 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:03.559 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:03.559 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:03.559 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:03.559 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:03.559 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:37:03.559 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:37:03.559 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:37:03.559 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:37:03.559 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:37:03.559 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:37:03.559 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:37:03.559 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:03.559 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:37:03.559 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:37:03.559 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:03.559 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:03.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.560 --rc genhtml_branch_coverage=1 00:37:03.560 --rc genhtml_function_coverage=1 00:37:03.560 --rc genhtml_legend=1 00:37:03.560 --rc geninfo_all_blocks=1 00:37:03.560 --rc geninfo_unexecuted_blocks=1 00:37:03.560 00:37:03.560 ' 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:03.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.560 --rc genhtml_branch_coverage=1 00:37:03.560 --rc genhtml_function_coverage=1 00:37:03.560 --rc genhtml_legend=1 00:37:03.560 --rc geninfo_all_blocks=1 00:37:03.560 --rc geninfo_unexecuted_blocks=1 00:37:03.560 00:37:03.560 ' 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:03.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.560 --rc genhtml_branch_coverage=1 00:37:03.560 --rc genhtml_function_coverage=1 00:37:03.560 --rc genhtml_legend=1 00:37:03.560 --rc geninfo_all_blocks=1 00:37:03.560 --rc geninfo_unexecuted_blocks=1 00:37:03.560 00:37:03.560 ' 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:03.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.560 --rc genhtml_branch_coverage=1 00:37:03.560 --rc genhtml_function_coverage=1 00:37:03.560 --rc genhtml_legend=1 00:37:03.560 --rc geninfo_all_blocks=1 00:37:03.560 --rc geninfo_unexecuted_blocks=1 00:37:03.560 00:37:03.560 ' 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:03.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:03.560 05:31:57 
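The "[: : integer expression expected" message above appears benign here: nvmf/common.sh runs an integer test on an optional flag that was never set, the [ builtin complains, the test simply evaluates false and the script carries on. A small sketch of the failure mode and the usual guard, using a hypothetical variable name (not the flag common.sh actually checks):

SOME_OPTIONAL_FLAG=''                                   # hypothetical flag, empty because the job never set it
[ "$SOME_OPTIONAL_FLAG" -eq 1 ]                         # reproduces: [: : integer expression expected
[ "${SOME_OPTIONAL_FLAG:-0}" -eq 1 ] && echo enabled    # defaulting to 0 avoids the noise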
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:37:03.560 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:05.463 05:31:59 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:05.463 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:05.723 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:05.723 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:05.723 05:31:59 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:05.723 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:05.723 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:05.723 05:31:59 
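Both E810 ports (vendor 0x8086, device 0x159b) are resolved to their kernel net devices purely through sysfs, which is what gather_supported_nvmf_pci_devs is doing in the trace above. A minimal standalone sketch of that lookup, assuming sysfs is mounted at the usual /sys path:

# Find Intel E810 functions and show the net devices behind them.
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    echo "Found ${pci##*/} ($(<"$pci/vendor") - $(<"$pci/device"))"
    ls "$pci/net" 2>/dev/null     # prints cvl_0_0 / cvl_0_1 on this rig
done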
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:05.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:05.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:37:05.723 00:37:05.723 --- 10.0.0.2 ping statistics --- 00:37:05.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:05.723 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:05.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:05.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:37:05.723 00:37:05.723 --- 10.0.0.1 ping statistics --- 00:37:05.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:05.723 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:05.723 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:05.724 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=577176 00:37:05.724 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 577176 00:37:05.724 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:37:05.724 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 577176 ']' 00:37:05.724 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:05.724 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:05.724 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:05.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:05.724 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:05.724 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:05.724 [2024-12-09 05:31:59.900382] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
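Both pings come back with 0% loss, which validates the point-to-point path before any NVMe traffic flows. Pulled together, the namespace wiring performed above amounts to the following sequence (interface names as discovered on this host):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the host netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP port 4420 through
ping -c 1 10.0.0.2                                               # host -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # namespaced target -> host

Running nvmf_tgt inside cvl_0_0_ns_spdk keeps the kernel initiator and the SPDK target on separate network stacks even though both NIC ports sit in the same chassis.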
00:37:05.724 [2024-12-09 05:31:59.900463] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:05.982 [2024-12-09 05:31:59.974288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:05.982 [2024-12-09 05:32:00.033414] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:05.982 [2024-12-09 05:32:00.033482] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:05.982 [2024-12-09 05:32:00.033507] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:05.982 [2024-12-09 05:32:00.033519] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:05.982 [2024-12-09 05:32:00.033528] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:05.982 [2024-12-09 05:32:00.035022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:05.982 [2024-12-09 05:32:00.035102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:05.982 [2024-12-09 05:32:00.035197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:05.982 [2024-12-09 05:32:00.035200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:05.982 05:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:05.982 05:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:37:05.982 05:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:05.982 05:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:05.982 05:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:05.982 05:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:05.982 05:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:06.240 [2024-12-09 05:32:00.421016] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:06.241 05:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:06.806 05:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:37:06.806 05:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:07.063 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:37:07.063 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:07.320 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:37:07.320 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:07.578 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:37:07.578 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:37:07.835 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:08.092 05:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:37:08.092 05:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:08.349 05:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:37:08.349 05:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:08.606 05:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:37:08.606 05:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:37:08.863 05:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:37:09.120 05:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:37:09.120 05:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:09.377 05:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:37:09.378 05:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:37:09.635 05:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:09.892 [2024-12-09 05:32:04.062550] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:09.892 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:37:10.150 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:37:10.407 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:11.340 05:32:05 
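With the target up inside the namespace, the provisioning traced above reduces to a short rpc.py sequence plus one host-side nvme connect. A condensed sketch (rpc points at spdk/scripts/rpc.py; ordering consolidated slightly relative to the trace):

rpc="./scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o -u 8192                       # TCP transport, same flags the test passes
for _ in 0 1 2 3 4 5 6; do $rpc bdev_malloc_create 64 512; done    # Malloc0..Malloc6 (64 MiB, 512 B blocks)
$rpc bdev_raid_create -n raid0   -r 0      -z 64 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do                      # four namespaces -> nvme0n1..nvme0n4
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
# Wait until all four namespaces are visible, keyed on the serial the target reports:
while (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) < 4 )); do sleep 2; done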
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:37:11.340 05:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:37:11.340 05:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:37:11.340 05:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:37:11.340 05:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:37:11.340 05:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:37:13.235 05:32:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:37:13.235 05:32:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:37:13.235 05:32:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:37:13.235 05:32:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:37:13.235 05:32:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:37:13.235 05:32:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:37:13.235 05:32:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:37:13.235 [global] 00:37:13.235 thread=1 00:37:13.235 invalidate=1 00:37:13.235 rw=write 00:37:13.235 time_based=1 00:37:13.235 runtime=1 00:37:13.235 ioengine=libaio 00:37:13.235 direct=1 00:37:13.235 bs=4096 00:37:13.235 iodepth=1 00:37:13.235 norandommap=0 00:37:13.235 numjobs=1 00:37:13.235 00:37:13.235 verify_dump=1 00:37:13.235 verify_backlog=512 00:37:13.235 verify_state_save=0 00:37:13.235 do_verify=1 00:37:13.235 verify=crc32c-intel 00:37:13.235 [job0] 00:37:13.235 filename=/dev/nvme0n1 00:37:13.235 [job1] 00:37:13.235 filename=/dev/nvme0n2 00:37:13.235 [job2] 00:37:13.235 filename=/dev/nvme0n3 00:37:13.235 [job3] 00:37:13.235 filename=/dev/nvme0n4 00:37:13.235 Could not set queue depth (nvme0n1) 00:37:13.235 Could not set queue depth (nvme0n2) 00:37:13.235 Could not set queue depth (nvme0n3) 00:37:13.235 Could not set queue depth (nvme0n4) 00:37:13.492 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:13.493 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:13.493 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:13.493 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:13.493 fio-3.35 00:37:13.493 Starting 4 threads 00:37:14.866 00:37:14.866 job0: (groupid=0, jobs=1): err= 0: pid=578227: Mon Dec 9 05:32:08 2024 00:37:14.866 read: IOPS=677, BW=2710KiB/s (2775kB/s)(2808KiB/1036msec) 00:37:14.866 slat (nsec): min=7134, max=48913, avg=14777.95, stdev=5772.49 00:37:14.866 clat (usec): min=202, max=42032, avg=1138.80, stdev=5964.30 00:37:14.866 lat (usec): min=210, max=42047, avg=1153.58, stdev=5964.76 00:37:14.866 clat percentiles (usec): 00:37:14.866 | 1.00th=[ 212], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 239], 
00:37:14.866 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:37:14.866 | 70.00th=[ 262], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 396], 00:37:14.866 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:14.866 | 99.99th=[42206] 00:37:14.866 write: IOPS=988, BW=3954KiB/s (4049kB/s)(4096KiB/1036msec); 0 zone resets 00:37:14.866 slat (nsec): min=5910, max=51654, avg=15591.17, stdev=6619.11 00:37:14.866 clat (usec): min=140, max=325, avg=197.24, stdev=26.89 00:37:14.866 lat (usec): min=150, max=358, avg=212.83, stdev=25.33 00:37:14.866 clat percentiles (usec): 00:37:14.866 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 167], 20.00th=[ 174], 00:37:14.866 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 204], 00:37:14.866 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 235], 95.00th=[ 243], 00:37:14.866 | 99.00th=[ 255], 99.50th=[ 260], 99.90th=[ 281], 99.95th=[ 326], 00:37:14.866 | 99.99th=[ 326] 00:37:14.866 bw ( KiB/s): min= 8192, max= 8192, per=43.81%, avg=8192.00, stdev= 0.00, samples=1 00:37:14.866 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:37:14.866 lat (usec) : 250=76.83%, 500=21.78%, 750=0.52% 00:37:14.866 lat (msec) : 50=0.87% 00:37:14.866 cpu : usr=2.03%, sys=3.29%, ctx=1726, majf=0, minf=1 00:37:14.866 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:14.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.866 issued rwts: total=702,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.866 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:14.866 job1: (groupid=0, jobs=1): err= 0: pid=578228: Mon Dec 9 05:32:08 2024 00:37:14.866 read: IOPS=506, BW=2026KiB/s (2074kB/s)(2052KiB/1013msec) 00:37:14.866 slat (nsec): min=7192, max=56438, avg=15392.15, stdev=6276.02 00:37:14.866 clat (usec): min=208, max=41687, avg=1473.40, stdev=6697.42 00:37:14.866 lat (usec): min=216, max=41706, avg=1488.79, stdev=6697.88 00:37:14.866 clat percentiles (usec): 00:37:14.866 | 1.00th=[ 229], 5.00th=[ 237], 10.00th=[ 245], 20.00th=[ 258], 00:37:14.866 | 30.00th=[ 293], 40.00th=[ 318], 50.00th=[ 334], 60.00th=[ 351], 00:37:14.866 | 70.00th=[ 363], 80.00th=[ 371], 90.00th=[ 388], 95.00th=[ 408], 00:37:14.866 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:37:14.866 | 99.99th=[41681] 00:37:14.866 write: IOPS=1010, BW=4043KiB/s (4140kB/s)(4096KiB/1013msec); 0 zone resets 00:37:14.866 slat (nsec): min=7983, max=65823, avg=15799.69, stdev=6675.05 00:37:14.866 clat (usec): min=150, max=570, avg=220.89, stdev=43.20 00:37:14.866 lat (usec): min=160, max=585, avg=236.69, stdev=44.04 00:37:14.866 clat percentiles (usec): 00:37:14.866 | 1.00th=[ 159], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 190], 00:37:14.866 | 30.00th=[ 196], 40.00th=[ 204], 50.00th=[ 212], 60.00th=[ 221], 00:37:14.866 | 70.00th=[ 229], 80.00th=[ 247], 90.00th=[ 273], 95.00th=[ 302], 00:37:14.866 | 99.00th=[ 400], 99.50th=[ 404], 99.90th=[ 453], 99.95th=[ 570], 00:37:14.866 | 99.99th=[ 570] 00:37:14.866 bw ( KiB/s): min= 4096, max= 4096, per=21.91%, avg=4096.00, stdev= 0.00, samples=2 00:37:14.866 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:37:14.866 lat (usec) : 250=59.27%, 500=39.62%, 750=0.13% 00:37:14.866 lat (msec) : 50=0.98% 00:37:14.866 cpu : usr=1.78%, sys=3.06%, ctx=1537, majf=0, minf=1 00:37:14.866 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:37:14.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.866 issued rwts: total=513,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.866 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:14.866 job2: (groupid=0, jobs=1): err= 0: pid=578235: Mon Dec 9 05:32:08 2024 00:37:14.866 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:37:14.866 slat (nsec): min=4452, max=73935, avg=13408.38, stdev=8814.23 00:37:14.866 clat (usec): min=172, max=2024, avg=250.53, stdev=72.08 00:37:14.866 lat (usec): min=177, max=2049, avg=263.94, stdev=76.35 00:37:14.866 clat percentiles (usec): 00:37:14.866 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 202], 00:37:14.866 | 30.00th=[ 210], 40.00th=[ 221], 50.00th=[ 231], 60.00th=[ 241], 00:37:14.866 | 70.00th=[ 258], 80.00th=[ 289], 90.00th=[ 355], 95.00th=[ 388], 00:37:14.866 | 99.00th=[ 437], 99.50th=[ 449], 99.90th=[ 529], 99.95th=[ 537], 00:37:14.866 | 99.99th=[ 2024] 00:37:14.866 write: IOPS=2280, BW=9123KiB/s (9342kB/s)(9132KiB/1001msec); 0 zone resets 00:37:14.866 slat (nsec): min=6644, max=53079, avg=13607.96, stdev=5388.80 00:37:14.866 clat (usec): min=129, max=500, avg=180.48, stdev=41.44 00:37:14.866 lat (usec): min=137, max=516, avg=194.09, stdev=42.24 00:37:14.866 clat percentiles (usec): 00:37:14.867 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:37:14.867 | 30.00th=[ 153], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 176], 00:37:14.867 | 70.00th=[ 198], 80.00th=[ 215], 90.00th=[ 229], 95.00th=[ 243], 00:37:14.867 | 99.00th=[ 334], 99.50th=[ 375], 99.90th=[ 424], 99.95th=[ 474], 00:37:14.867 | 99.99th=[ 502] 00:37:14.867 bw ( KiB/s): min= 8192, max= 8192, per=43.81%, avg=8192.00, stdev= 0.00, samples=1 00:37:14.867 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:37:14.867 lat (usec) : 250=82.15%, 500=17.66%, 750=0.16% 00:37:14.867 lat (msec) : 4=0.02% 00:37:14.867 cpu : usr=3.20%, sys=6.00%, ctx=4331, majf=0, minf=1 00:37:14.867 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:14.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.867 issued rwts: total=2048,2283,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.867 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:14.867 job3: (groupid=0, jobs=1): err= 0: pid=578236: Mon Dec 9 05:32:08 2024 00:37:14.867 read: IOPS=52, BW=210KiB/s (215kB/s)(212KiB/1011msec) 00:37:14.867 slat (nsec): min=6353, max=33602, avg=18070.36, stdev=8243.79 00:37:14.867 clat (usec): min=244, max=42032, avg=16745.48, stdev=20389.32 00:37:14.867 lat (usec): min=251, max=42048, avg=16763.55, stdev=20388.10 00:37:14.867 clat percentiles (usec): 00:37:14.867 | 1.00th=[ 245], 5.00th=[ 262], 10.00th=[ 277], 20.00th=[ 359], 00:37:14.867 | 30.00th=[ 388], 40.00th=[ 441], 50.00th=[ 474], 60.00th=[ 523], 00:37:14.867 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:37:14.867 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:14.867 | 99.99th=[42206] 00:37:14.867 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:37:14.867 slat (nsec): min=6376, max=48128, avg=10037.93, stdev=4177.51 00:37:14.867 clat (usec): min=159, max=550, avg=225.41, stdev=27.08 00:37:14.867 lat (usec): min=167, max=583, avg=235.45, stdev=27.30 
00:37:14.867 clat percentiles (usec): 00:37:14.867 | 1.00th=[ 178], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 202], 00:37:14.867 | 30.00th=[ 208], 40.00th=[ 219], 50.00th=[ 229], 60.00th=[ 239], 00:37:14.867 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 249], 95.00th=[ 253], 00:37:14.867 | 99.00th=[ 269], 99.50th=[ 289], 99.90th=[ 553], 99.95th=[ 553], 00:37:14.867 | 99.99th=[ 553] 00:37:14.867 bw ( KiB/s): min= 4096, max= 4096, per=21.91%, avg=4096.00, stdev= 0.00, samples=1 00:37:14.867 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:14.867 lat (usec) : 250=85.31%, 500=10.44%, 750=0.53% 00:37:14.867 lat (msec) : 50=3.72% 00:37:14.867 cpu : usr=0.40%, sys=0.40%, ctx=565, majf=0, minf=1 00:37:14.867 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:14.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.867 issued rwts: total=53,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.867 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:14.867 00:37:14.867 Run status group 0 (all jobs): 00:37:14.867 READ: bw=12.5MiB/s (13.1MB/s), 210KiB/s-8184KiB/s (215kB/s-8380kB/s), io=13.0MiB (13.6MB), run=1001-1036msec 00:37:14.867 WRITE: bw=18.3MiB/s (19.1MB/s), 2026KiB/s-9123KiB/s (2074kB/s-9342kB/s), io=18.9MiB (19.8MB), run=1001-1036msec 00:37:14.867 00:37:14.867 Disk stats (read/write): 00:37:14.867 nvme0n1: ios=746/1024, merge=0/0, ticks=549/172, in_queue=721, util=83.77% 00:37:14.867 nvme0n2: ios=510/512, merge=0/0, ticks=647/111, in_queue=758, util=85.30% 00:37:14.867 nvme0n3: ios=1637/2048, merge=0/0, ticks=381/334, in_queue=715, util=88.60% 00:37:14.867 nvme0n4: ios=48/512, merge=0/0, ticks=678/112, in_queue=790, util=89.37% 00:37:14.867 05:32:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:37:14.867 [global] 00:37:14.867 thread=1 00:37:14.867 invalidate=1 00:37:14.867 rw=randwrite 00:37:14.867 time_based=1 00:37:14.867 runtime=1 00:37:14.867 ioengine=libaio 00:37:14.867 direct=1 00:37:14.867 bs=4096 00:37:14.867 iodepth=1 00:37:14.867 norandommap=0 00:37:14.867 numjobs=1 00:37:14.867 00:37:14.867 verify_dump=1 00:37:14.867 verify_backlog=512 00:37:14.867 verify_state_save=0 00:37:14.867 do_verify=1 00:37:14.867 verify=crc32c-intel 00:37:14.867 [job0] 00:37:14.867 filename=/dev/nvme0n1 00:37:14.867 [job1] 00:37:14.867 filename=/dev/nvme0n2 00:37:14.867 [job2] 00:37:14.867 filename=/dev/nvme0n3 00:37:14.867 [job3] 00:37:14.867 filename=/dev/nvme0n4 00:37:14.867 Could not set queue depth (nvme0n1) 00:37:14.867 Could not set queue depth (nvme0n2) 00:37:14.867 Could not set queue depth (nvme0n3) 00:37:14.867 Could not set queue depth (nvme0n4) 00:37:14.867 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:14.867 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:14.867 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:14.867 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:14.867 fio-3.35 00:37:14.867 Starting 4 threads 00:37:16.241 00:37:16.241 job0: (groupid=0, jobs=1): err= 0: pid=578581: Mon Dec 9 05:32:10 2024 00:37:16.241 read: 
IOPS=20, BW=83.9KiB/s (85.9kB/s)(84.0KiB/1001msec) 00:37:16.241 slat (nsec): min=13576, max=34525, avg=24920.86, stdev=9232.43 00:37:16.241 clat (usec): min=40872, max=42932, avg=41300.66, stdev=565.46 00:37:16.241 lat (usec): min=40906, max=42950, avg=41325.58, stdev=565.98 00:37:16.241 clat percentiles (usec): 00:37:16.241 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:16.241 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:16.241 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:37:16.241 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:37:16.241 | 99.99th=[42730] 00:37:16.241 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:37:16.241 slat (nsec): min=6440, max=60039, avg=17079.97, stdev=7152.02 00:37:16.241 clat (usec): min=164, max=458, avg=236.66, stdev=34.88 00:37:16.241 lat (usec): min=172, max=470, avg=253.74, stdev=33.87 00:37:16.241 clat percentiles (usec): 00:37:16.241 | 1.00th=[ 182], 5.00th=[ 194], 10.00th=[ 204], 20.00th=[ 217], 00:37:16.241 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 239], 00:37:16.241 | 70.00th=[ 243], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 281], 00:37:16.241 | 99.00th=[ 404], 99.50th=[ 445], 99.90th=[ 457], 99.95th=[ 457], 00:37:16.241 | 99.99th=[ 457] 00:37:16.241 bw ( KiB/s): min= 4096, max= 4096, per=44.49%, avg=4096.00, stdev= 0.00, samples=1 00:37:16.241 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:16.241 lat (usec) : 250=74.67%, 500=21.39% 00:37:16.241 lat (msec) : 50=3.94% 00:37:16.241 cpu : usr=0.70%, sys=0.90%, ctx=533, majf=0, minf=1 00:37:16.241 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:16.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.241 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.241 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:16.241 job1: (groupid=0, jobs=1): err= 0: pid=578582: Mon Dec 9 05:32:10 2024 00:37:16.241 read: IOPS=21, BW=86.6KiB/s (88.7kB/s)(88.0KiB/1016msec) 00:37:16.241 slat (nsec): min=15094, max=32907, avg=24924.68, stdev=8311.85 00:37:16.241 clat (usec): min=40887, max=42006, avg=41084.81, stdev=330.75 00:37:16.241 lat (usec): min=40920, max=42039, avg=41109.74, stdev=331.71 00:37:16.241 clat percentiles (usec): 00:37:16.241 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:37:16.241 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:16.241 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:37:16.241 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:16.241 | 99.99th=[42206] 00:37:16.241 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:37:16.241 slat (nsec): min=5938, max=41204, avg=14072.06, stdev=6177.95 00:37:16.241 clat (usec): min=138, max=343, avg=198.79, stdev=33.68 00:37:16.241 lat (usec): min=145, max=352, avg=212.87, stdev=35.00 00:37:16.241 clat percentiles (usec): 00:37:16.241 | 1.00th=[ 147], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 176], 00:37:16.241 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 196], 00:37:16.241 | 70.00th=[ 204], 80.00th=[ 219], 90.00th=[ 245], 95.00th=[ 281], 00:37:16.241 | 99.00th=[ 302], 99.50th=[ 318], 99.90th=[ 343], 99.95th=[ 343], 00:37:16.241 | 99.99th=[ 343] 00:37:16.241 bw ( 
KiB/s): min= 4096, max= 4096, per=44.49%, avg=4096.00, stdev= 0.00, samples=1 00:37:16.241 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:16.241 lat (usec) : 250=88.20%, 500=7.68% 00:37:16.241 lat (msec) : 50=4.12% 00:37:16.241 cpu : usr=0.59%, sys=0.49%, ctx=535, majf=0, minf=1 00:37:16.241 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:16.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.241 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.241 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:16.241 job2: (groupid=0, jobs=1): err= 0: pid=578583: Mon Dec 9 05:32:10 2024 00:37:16.241 read: IOPS=21, BW=85.6KiB/s (87.7kB/s)(88.0KiB/1028msec) 00:37:16.241 slat (nsec): min=15325, max=34976, avg=25480.91, stdev=9003.18 00:37:16.241 clat (usec): min=40857, max=42050, avg=41274.66, stdev=468.87 00:37:16.241 lat (usec): min=40892, max=42076, avg=41300.14, stdev=470.66 00:37:16.241 clat percentiles (usec): 00:37:16.241 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:37:16.242 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:16.242 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:37:16.242 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:16.242 | 99.99th=[42206] 00:37:16.242 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:37:16.242 slat (nsec): min=6782, max=51032, avg=16244.74, stdev=6911.49 00:37:16.242 clat (usec): min=145, max=1245, avg=211.47, stdev=80.63 00:37:16.242 lat (usec): min=159, max=1252, avg=227.71, stdev=80.41 00:37:16.242 clat percentiles (usec): 00:37:16.242 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 172], 00:37:16.242 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 194], 60.00th=[ 204], 00:37:16.242 | 70.00th=[ 217], 80.00th=[ 237], 90.00th=[ 269], 95.00th=[ 297], 00:37:16.242 | 99.00th=[ 529], 99.50th=[ 816], 99.90th=[ 1237], 99.95th=[ 1237], 00:37:16.242 | 99.99th=[ 1237] 00:37:16.242 bw ( KiB/s): min= 4096, max= 4096, per=44.49%, avg=4096.00, stdev= 0.00, samples=1 00:37:16.242 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:16.242 lat (usec) : 250=82.40%, 500=12.36%, 750=0.56%, 1000=0.37% 00:37:16.242 lat (msec) : 2=0.19%, 50=4.12% 00:37:16.242 cpu : usr=0.29%, sys=0.88%, ctx=535, majf=0, minf=1 00:37:16.242 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:16.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.242 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.242 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:16.242 job3: (groupid=0, jobs=1): err= 0: pid=578584: Mon Dec 9 05:32:10 2024 00:37:16.242 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:37:16.242 slat (nsec): min=7120, max=36096, avg=8695.34, stdev=4223.75 00:37:16.242 clat (usec): min=180, max=42899, avg=1576.35, stdev=7384.22 00:37:16.242 lat (usec): min=188, max=42919, avg=1585.05, stdev=7388.06 00:37:16.242 clat percentiles (usec): 00:37:16.242 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 196], 00:37:16.242 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 208], 00:37:16.242 | 70.00th=[ 210], 80.00th=[ 215], 90.00th=[ 
243], 95.00th=[ 293], 00:37:16.242 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:37:16.242 | 99.99th=[42730] 00:37:16.242 write: IOPS=829, BW=3317KiB/s (3396kB/s)(3320KiB/1001msec); 0 zone resets 00:37:16.242 slat (nsec): min=8059, max=62390, avg=16735.29, stdev=9005.55 00:37:16.242 clat (usec): min=139, max=475, avg=204.26, stdev=44.02 00:37:16.242 lat (usec): min=148, max=516, avg=220.99, stdev=48.31 00:37:16.242 clat percentiles (usec): 00:37:16.242 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:37:16.242 | 30.00th=[ 167], 40.00th=[ 194], 50.00th=[ 208], 60.00th=[ 219], 00:37:16.242 | 70.00th=[ 227], 80.00th=[ 239], 90.00th=[ 255], 95.00th=[ 273], 00:37:16.242 | 99.00th=[ 314], 99.50th=[ 379], 99.90th=[ 478], 99.95th=[ 478], 00:37:16.242 | 99.99th=[ 478] 00:37:16.242 bw ( KiB/s): min= 4096, max= 4096, per=44.49%, avg=4096.00, stdev= 0.00, samples=1 00:37:16.242 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:16.242 lat (usec) : 250=88.38%, 500=10.28%, 1000=0.07% 00:37:16.242 lat (msec) : 50=1.27% 00:37:16.242 cpu : usr=1.40%, sys=2.20%, ctx=1343, majf=0, minf=1 00:37:16.242 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:16.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.242 issued rwts: total=512,830,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.242 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:16.242 00:37:16.242 Run status group 0 (all jobs): 00:37:16.242 READ: bw=2245KiB/s (2299kB/s), 83.9KiB/s-2046KiB/s (85.9kB/s-2095kB/s), io=2308KiB (2363kB), run=1001-1028msec 00:37:16.242 WRITE: bw=9206KiB/s (9427kB/s), 1992KiB/s-3317KiB/s (2040kB/s-3396kB/s), io=9464KiB (9691kB), run=1001-1028msec 00:37:16.242 00:37:16.242 Disk stats (read/write): 00:37:16.242 nvme0n1: ios=67/512, merge=0/0, ticks=720/113, in_queue=833, util=86.67% 00:37:16.242 nvme0n2: ios=52/512, merge=0/0, ticks=967/96, in_queue=1063, util=97.36% 00:37:16.242 nvme0n3: ios=75/512, merge=0/0, ticks=891/101, in_queue=992, util=98.44% 00:37:16.242 nvme0n4: ios=150/512, merge=0/0, ticks=1668/101, in_queue=1769, util=98.42% 00:37:16.242 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:37:16.242 [global] 00:37:16.242 thread=1 00:37:16.242 invalidate=1 00:37:16.242 rw=write 00:37:16.242 time_based=1 00:37:16.242 runtime=1 00:37:16.242 ioengine=libaio 00:37:16.242 direct=1 00:37:16.242 bs=4096 00:37:16.242 iodepth=128 00:37:16.242 norandommap=0 00:37:16.242 numjobs=1 00:37:16.242 00:37:16.242 verify_dump=1 00:37:16.242 verify_backlog=512 00:37:16.242 verify_state_save=0 00:37:16.242 do_verify=1 00:37:16.242 verify=crc32c-intel 00:37:16.242 [job0] 00:37:16.242 filename=/dev/nvme0n1 00:37:16.242 [job1] 00:37:16.242 filename=/dev/nvme0n2 00:37:16.242 [job2] 00:37:16.242 filename=/dev/nvme0n3 00:37:16.242 [job3] 00:37:16.242 filename=/dev/nvme0n4 00:37:16.242 Could not set queue depth (nvme0n1) 00:37:16.242 Could not set queue depth (nvme0n2) 00:37:16.242 Could not set queue depth (nvme0n3) 00:37:16.242 Could not set queue depth (nvme0n4) 00:37:16.500 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:16.500 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:37:16.500 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:16.500 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:16.500 fio-3.35 00:37:16.500 Starting 4 threads 00:37:17.890 00:37:17.890 job0: (groupid=0, jobs=1): err= 0: pid=578808: Mon Dec 9 05:32:11 2024 00:37:17.890 read: IOPS=2532, BW=9.89MiB/s (10.4MB/s)(10.0MiB/1011msec) 00:37:17.890 slat (usec): min=2, max=15739, avg=166.09, stdev=1034.57 00:37:17.890 clat (usec): min=6494, max=59792, avg=20132.53, stdev=7029.32 00:37:17.890 lat (usec): min=6499, max=63639, avg=20298.61, stdev=7109.17 00:37:17.890 clat percentiles (usec): 00:37:17.890 | 1.00th=[11076], 5.00th=[13829], 10.00th=[14615], 20.00th=[15008], 00:37:17.890 | 30.00th=[15795], 40.00th=[16450], 50.00th=[17957], 60.00th=[19006], 00:37:17.890 | 70.00th=[19792], 80.00th=[25822], 90.00th=[31327], 95.00th=[34341], 00:37:17.890 | 99.00th=[39584], 99.50th=[49546], 99.90th=[60031], 99.95th=[60031], 00:37:17.890 | 99.99th=[60031] 00:37:17.890 write: IOPS=2673, BW=10.4MiB/s (11.0MB/s)(10.6MiB/1011msec); 0 zone resets 00:37:17.890 slat (usec): min=3, max=25422, avg=206.18, stdev=1188.94 00:37:17.891 clat (usec): min=8487, max=72507, avg=27315.63, stdev=15086.91 00:37:17.891 lat (usec): min=8506, max=72514, avg=27521.81, stdev=15162.83 00:37:17.891 clat percentiles (usec): 00:37:17.891 | 1.00th=[ 9241], 5.00th=[11338], 10.00th=[13829], 20.00th=[14615], 00:37:17.891 | 30.00th=[16909], 40.00th=[22152], 50.00th=[23462], 60.00th=[25822], 00:37:17.891 | 70.00th=[29754], 80.00th=[34866], 90.00th=[51643], 95.00th=[61080], 00:37:17.891 | 99.00th=[71828], 99.50th=[71828], 99.90th=[72877], 99.95th=[72877], 00:37:17.891 | 99.99th=[72877] 00:37:17.891 bw ( KiB/s): min= 8312, max=12288, per=16.22%, avg=10300.00, stdev=2811.46, samples=2 00:37:17.891 iops : min= 2078, max= 3072, avg=2575.00, stdev=702.86, samples=2 00:37:17.891 lat (msec) : 10=2.24%, 20=49.76%, 50=42.41%, 100=5.59% 00:37:17.891 cpu : usr=1.98%, sys=3.36%, ctx=247, majf=0, minf=1 00:37:17.891 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:37:17.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:17.891 issued rwts: total=2560,2703,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.891 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:17.891 job1: (groupid=0, jobs=1): err= 0: pid=578809: Mon Dec 9 05:32:11 2024 00:37:17.891 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.2MiB/1011msec) 00:37:17.891 slat (usec): min=2, max=11179, avg=107.86, stdev=726.67 00:37:17.891 clat (usec): min=4499, max=32562, avg=13344.42, stdev=3509.22 00:37:17.891 lat (usec): min=4505, max=32568, avg=13452.28, stdev=3553.06 00:37:17.891 clat percentiles (usec): 00:37:17.891 | 1.00th=[ 5669], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[11207], 00:37:17.891 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12518], 60.00th=[13304], 00:37:17.891 | 70.00th=[14091], 80.00th=[15008], 90.00th=[18482], 95.00th=[20317], 00:37:17.891 | 99.00th=[24511], 99.50th=[28443], 99.90th=[32637], 99.95th=[32637], 00:37:17.891 | 99.99th=[32637] 00:37:17.891 write: IOPS=5064, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1011msec); 0 zone resets 00:37:17.891 slat (usec): min=3, max=11785, avg=88.47, stdev=388.77 00:37:17.891 clat (usec): min=1280, max=32563, avg=12917.05, stdev=4388.24 00:37:17.891 lat (usec): 
min=1290, max=32576, avg=13005.53, stdev=4424.47 00:37:17.891 clat percentiles (usec): 00:37:17.891 | 1.00th=[ 3851], 5.00th=[ 5866], 10.00th=[ 7701], 20.00th=[11207], 00:37:17.891 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:37:17.891 | 70.00th=[12649], 80.00th=[15270], 90.00th=[19268], 95.00th=[22938], 00:37:17.891 | 99.00th=[25822], 99.50th=[26608], 99.90th=[28181], 99.95th=[28181], 00:37:17.891 | 99.99th=[32637] 00:37:17.891 bw ( KiB/s): min=19832, max=20480, per=31.74%, avg=20156.00, stdev=458.21, samples=2 00:37:17.891 iops : min= 4958, max= 5120, avg=5039.00, stdev=114.55, samples=2 00:37:17.891 lat (msec) : 2=0.02%, 4=0.63%, 10=12.02%, 20=79.43%, 50=7.90% 00:37:17.891 cpu : usr=5.84%, sys=8.71%, ctx=637, majf=0, minf=1 00:37:17.891 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:37:17.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:17.891 issued rwts: total=4654,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.891 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:17.891 job2: (groupid=0, jobs=1): err= 0: pid=578810: Mon Dec 9 05:32:11 2024 00:37:17.891 read: IOPS=3639, BW=14.2MiB/s (14.9MB/s)(14.2MiB/1002msec) 00:37:17.891 slat (usec): min=3, max=12172, avg=127.19, stdev=727.33 00:37:17.891 clat (usec): min=855, max=33352, avg=15548.95, stdev=4127.50 00:37:17.891 lat (usec): min=6678, max=33361, avg=15676.15, stdev=4173.92 00:37:17.891 clat percentiles (usec): 00:37:17.891 | 1.00th=[ 8717], 5.00th=[10028], 10.00th=[11600], 20.00th=[12649], 00:37:17.891 | 30.00th=[12911], 40.00th=[13042], 50.00th=[14091], 60.00th=[16188], 00:37:17.891 | 70.00th=[17695], 80.00th=[19268], 90.00th=[20317], 95.00th=[22414], 00:37:17.891 | 99.00th=[27395], 99.50th=[31851], 99.90th=[33424], 99.95th=[33424], 00:37:17.891 | 99.99th=[33424] 00:37:17.891 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:37:17.891 slat (usec): min=4, max=9273, avg=121.73, stdev=500.49 00:37:17.891 clat (usec): min=7105, max=44033, avg=17057.44, stdev=7234.53 00:37:17.891 lat (usec): min=7116, max=44054, avg=17179.17, stdev=7284.12 00:37:17.891 clat percentiles (usec): 00:37:17.891 | 1.00th=[ 8586], 5.00th=[11731], 10.00th=[12387], 20.00th=[12780], 00:37:17.891 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13698], 00:37:17.891 | 70.00th=[16057], 80.00th=[22938], 90.00th=[29754], 95.00th=[32375], 00:37:17.891 | 99.00th=[41157], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:37:17.891 | 99.99th=[43779] 00:37:17.891 bw ( KiB/s): min=12312, max=19968, per=25.41%, avg=16140.00, stdev=5413.61, samples=2 00:37:17.891 iops : min= 3078, max= 4992, avg=4035.00, stdev=1353.40, samples=2 00:37:17.891 lat (usec) : 1000=0.01% 00:37:17.891 lat (msec) : 10=3.27%, 20=77.63%, 50=19.09% 00:37:17.891 cpu : usr=4.50%, sys=8.39%, ctx=553, majf=0, minf=1 00:37:17.891 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:37:17.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:17.891 issued rwts: total=3647,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.891 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:17.891 job3: (groupid=0, jobs=1): err= 0: pid=578811: Mon Dec 9 05:32:11 2024 00:37:17.891 read: IOPS=4035, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1015msec) 
00:37:17.891 slat (usec): min=3, max=13676, avg=120.99, stdev=804.76 00:37:17.891 clat (usec): min=5479, max=50311, avg=14733.76, stdev=5200.03 00:37:17.891 lat (usec): min=5487, max=50322, avg=14854.75, stdev=5263.94 00:37:17.891 clat percentiles (usec): 00:37:17.891 | 1.00th=[ 6259], 5.00th=[10159], 10.00th=[10683], 20.00th=[11863], 00:37:17.891 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13566], 60.00th=[14091], 00:37:17.891 | 70.00th=[14353], 80.00th=[16319], 90.00th=[20841], 95.00th=[23987], 00:37:17.891 | 99.00th=[37487], 99.50th=[43254], 99.90th=[50070], 99.95th=[50070], 00:37:17.891 | 99.99th=[50070] 00:37:17.891 write: IOPS=4134, BW=16.2MiB/s (16.9MB/s)(16.4MiB/1015msec); 0 zone resets 00:37:17.891 slat (usec): min=4, max=10368, avg=110.10, stdev=564.06 00:37:17.891 clat (usec): min=3446, max=62803, avg=16246.53, stdev=9834.36 00:37:17.891 lat (usec): min=3457, max=62818, avg=16356.63, stdev=9904.74 00:37:17.891 clat percentiles (usec): 00:37:17.891 | 1.00th=[ 5211], 5.00th=[ 6980], 10.00th=[ 9634], 20.00th=[12256], 00:37:17.891 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13304], 60.00th=[13435], 00:37:17.891 | 70.00th=[14222], 80.00th=[16188], 90.00th=[26346], 95.00th=[42730], 00:37:17.891 | 99.00th=[54789], 99.50th=[58983], 99.90th=[62653], 99.95th=[62653], 00:37:17.891 | 99.99th=[62653] 00:37:17.891 bw ( KiB/s): min=12288, max=20480, per=25.80%, avg=16384.00, stdev=5792.62, samples=2 00:37:17.891 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:37:17.891 lat (msec) : 4=0.14%, 10=6.73%, 20=79.22%, 50=12.88%, 100=1.02% 00:37:17.891 cpu : usr=4.14%, sys=9.37%, ctx=463, majf=0, minf=1 00:37:17.891 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:37:17.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:17.891 issued rwts: total=4096,4197,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.891 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:17.891 00:37:17.891 Run status group 0 (all jobs): 00:37:17.891 READ: bw=57.6MiB/s (60.4MB/s), 9.89MiB/s-18.0MiB/s (10.4MB/s-18.9MB/s), io=58.4MiB (61.3MB), run=1002-1015msec 00:37:17.891 WRITE: bw=62.0MiB/s (65.0MB/s), 10.4MiB/s-19.8MiB/s (11.0MB/s-20.7MB/s), io=63.0MiB (66.0MB), run=1002-1015msec 00:37:17.891 00:37:17.891 Disk stats (read/write): 00:37:17.891 nvme0n1: ios=2067/2431, merge=0/0, ticks=17573/23337, in_queue=40910, util=97.49% 00:37:17.891 nvme0n2: ios=3953/4096, merge=0/0, ticks=51305/53342, in_queue=104647, util=86.88% 00:37:17.891 nvme0n3: ios=3130/3231, merge=0/0, ticks=25978/27268, in_queue=53246, util=98.02% 00:37:17.891 nvme0n4: ios=3614/3775, merge=0/0, ticks=48773/54227, in_queue=103000, util=98.11% 00:37:17.891 05:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:37:17.891 [global] 00:37:17.891 thread=1 00:37:17.891 invalidate=1 00:37:17.891 rw=randwrite 00:37:17.891 time_based=1 00:37:17.891 runtime=1 00:37:17.891 ioengine=libaio 00:37:17.891 direct=1 00:37:17.891 bs=4096 00:37:17.891 iodepth=128 00:37:17.891 norandommap=0 00:37:17.891 numjobs=1 00:37:17.891 00:37:17.891 verify_dump=1 00:37:17.891 verify_backlog=512 00:37:17.891 verify_state_save=0 00:37:17.891 do_verify=1 00:37:17.891 verify=crc32c-intel 00:37:17.891 [job0] 00:37:17.891 filename=/dev/nvme0n1 00:37:17.891 [job1] 00:37:17.891 
filename=/dev/nvme0n2 00:37:17.891 [job2] 00:37:17.891 filename=/dev/nvme0n3 00:37:17.891 [job3] 00:37:17.891 filename=/dev/nvme0n4 00:37:17.891 Could not set queue depth (nvme0n1) 00:37:17.891 Could not set queue depth (nvme0n2) 00:37:17.891 Could not set queue depth (nvme0n3) 00:37:17.891 Could not set queue depth (nvme0n4) 00:37:17.891 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:17.891 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:17.891 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:17.891 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:17.891 fio-3.35 00:37:17.891 Starting 4 threads 00:37:19.263 00:37:19.263 job0: (groupid=0, jobs=1): err= 0: pid=579048: Mon Dec 9 05:32:13 2024 00:37:19.263 read: IOPS=5140, BW=20.1MiB/s (21.1MB/s)(20.2MiB/1006msec) 00:37:19.263 slat (usec): min=2, max=8469, avg=91.01, stdev=551.92 00:37:19.263 clat (usec): min=4901, max=20144, avg=11396.52, stdev=1596.02 00:37:19.263 lat (usec): min=4908, max=20178, avg=11487.53, stdev=1658.49 00:37:19.263 clat percentiles (usec): 00:37:19.263 | 1.00th=[ 7767], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10814], 00:37:19.263 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:37:19.263 | 70.00th=[11600], 80.00th=[11863], 90.00th=[13304], 95.00th=[15008], 00:37:19.263 | 99.00th=[15926], 99.50th=[16319], 99.90th=[16581], 99.95th=[19792], 00:37:19.263 | 99.99th=[20055] 00:37:19.263 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:37:19.263 slat (usec): min=3, max=27256, avg=83.83, stdev=489.74 00:37:19.263 clat (usec): min=5090, max=41811, avg=12118.80, stdev=4089.74 00:37:19.263 lat (usec): min=5758, max=41818, avg=12202.64, stdev=4108.72 00:37:19.263 clat percentiles (usec): 00:37:19.263 | 1.00th=[ 5932], 5.00th=[ 8455], 10.00th=[ 9765], 20.00th=[10814], 00:37:19.263 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11863], 60.00th=[11994], 00:37:19.263 | 70.00th=[12125], 80.00th=[12256], 90.00th=[13829], 95.00th=[15533], 00:37:19.263 | 99.00th=[39584], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:37:19.263 | 99.99th=[41681] 00:37:19.263 bw ( KiB/s): min=21240, max=23208, per=35.53%, avg=22224.00, stdev=1391.59, samples=2 00:37:19.263 iops : min= 5310, max= 5802, avg=5556.00, stdev=347.90, samples=2 00:37:19.263 lat (msec) : 10=13.34%, 20=85.47%, 50=1.19% 00:37:19.263 cpu : usr=6.37%, sys=10.75%, ctx=661, majf=0, minf=2 00:37:19.263 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:37:19.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:19.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:19.263 issued rwts: total=5171,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:19.263 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:19.263 job1: (groupid=0, jobs=1): err= 0: pid=579049: Mon Dec 9 05:32:13 2024 00:37:19.263 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:37:19.263 slat (usec): min=2, max=26164, avg=228.98, stdev=1608.54 00:37:19.263 clat (usec): min=5171, max=78056, avg=28577.56, stdev=20081.23 00:37:19.263 lat (usec): min=5176, max=78062, avg=28806.54, stdev=20220.84 00:37:19.263 clat percentiles (usec): 00:37:19.263 | 1.00th=[ 7963], 5.00th=[11600], 10.00th=[11994], 
20.00th=[12649], 00:37:19.263 | 30.00th=[12780], 40.00th=[14222], 50.00th=[18744], 60.00th=[25297], 00:37:19.263 | 70.00th=[36963], 80.00th=[44827], 90.00th=[62129], 95.00th=[69731], 00:37:19.263 | 99.00th=[78119], 99.50th=[78119], 99.90th=[78119], 99.95th=[78119], 00:37:19.263 | 99.99th=[78119] 00:37:19.263 write: IOPS=2636, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1008msec); 0 zone resets 00:37:19.263 slat (usec): min=2, max=15663, avg=143.32, stdev=909.56 00:37:19.263 clat (usec): min=3432, max=57614, avg=20582.36, stdev=10636.07 00:37:19.263 lat (usec): min=3436, max=57855, avg=20725.68, stdev=10704.63 00:37:19.263 clat percentiles (usec): 00:37:19.263 | 1.00th=[ 6587], 5.00th=[ 9634], 10.00th=[10421], 20.00th=[11863], 00:37:19.263 | 30.00th=[12649], 40.00th=[15664], 50.00th=[17433], 60.00th=[19006], 00:37:19.263 | 70.00th=[23987], 80.00th=[28705], 90.00th=[36963], 95.00th=[43779], 00:37:19.263 | 99.00th=[57410], 99.50th=[57410], 99.90th=[57410], 99.95th=[57410], 00:37:19.263 | 99.99th=[57410] 00:37:19.263 bw ( KiB/s): min= 8200, max=12280, per=16.37%, avg=10240.00, stdev=2885.00, samples=2 00:37:19.263 iops : min= 2050, max= 3070, avg=2560.00, stdev=721.25, samples=2 00:37:19.263 lat (msec) : 4=0.11%, 10=4.73%, 20=53.26%, 50=32.85%, 100=9.05% 00:37:19.263 cpu : usr=2.88%, sys=3.28%, ctx=201, majf=0, minf=1 00:37:19.263 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:37:19.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:19.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:19.263 issued rwts: total=2560,2658,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:19.263 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:19.263 job2: (groupid=0, jobs=1): err= 0: pid=579050: Mon Dec 9 05:32:13 2024 00:37:19.263 read: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec) 00:37:19.263 slat (usec): min=3, max=16546, avg=156.56, stdev=930.09 00:37:19.263 clat (usec): min=10786, max=47922, avg=18159.85, stdev=4009.24 00:37:19.263 lat (usec): min=10800, max=47928, avg=18316.42, stdev=4124.44 00:37:19.263 clat percentiles (usec): 00:37:19.263 | 1.00th=[12125], 5.00th=[14091], 10.00th=[14615], 20.00th=[15533], 00:37:19.263 | 30.00th=[15795], 40.00th=[16581], 50.00th=[17433], 60.00th=[17957], 00:37:19.263 | 70.00th=[18482], 80.00th=[19530], 90.00th=[22938], 95.00th=[24773], 00:37:19.263 | 99.00th=[35390], 99.50th=[38536], 99.90th=[47973], 99.95th=[47973], 00:37:19.263 | 99.99th=[47973] 00:37:19.263 write: IOPS=2686, BW=10.5MiB/s (11.0MB/s)(10.6MiB/1009msec); 0 zone resets 00:37:19.263 slat (usec): min=4, max=26120, avg=211.36, stdev=1253.40 00:37:19.263 clat (msec): min=7, max=124, avg=29.38, stdev=22.33 00:37:19.263 lat (msec): min=8, max=124, avg=29.60, stdev=22.47 00:37:19.263 clat percentiles (msec): 00:37:19.263 | 1.00th=[ 11], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 17], 00:37:19.263 | 30.00th=[ 17], 40.00th=[ 18], 50.00th=[ 23], 60.00th=[ 26], 00:37:19.263 | 70.00th=[ 32], 80.00th=[ 36], 90.00th=[ 54], 95.00th=[ 79], 00:37:19.263 | 99.00th=[ 122], 99.50th=[ 123], 99.90th=[ 125], 99.95th=[ 125], 00:37:19.263 | 99.99th=[ 125] 00:37:19.263 bw ( KiB/s): min= 9544, max=11128, per=16.52%, avg=10336.00, stdev=1120.06, samples=2 00:37:19.263 iops : min= 2386, max= 2782, avg=2584.00, stdev=280.01, samples=2 00:37:19.263 lat (msec) : 10=0.28%, 20=63.23%, 50=29.58%, 100=4.95%, 250=1.95% 00:37:19.263 cpu : usr=3.37%, sys=5.46%, ctx=242, majf=0, minf=1 00:37:19.263 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 
32=0.6%, >=64=98.8% 00:37:19.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:19.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:19.263 issued rwts: total=2560,2711,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:19.263 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:19.263 job3: (groupid=0, jobs=1): err= 0: pid=579051: Mon Dec 9 05:32:13 2024 00:37:19.263 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:37:19.263 slat (usec): min=3, max=4254, avg=100.48, stdev=522.43 00:37:19.263 clat (usec): min=9451, max=17568, avg=13065.83, stdev=1244.16 00:37:19.263 lat (usec): min=9464, max=17906, avg=13166.30, stdev=1273.74 00:37:19.263 clat percentiles (usec): 00:37:19.263 | 1.00th=[10028], 5.00th=[10683], 10.00th=[11338], 20.00th=[11994], 00:37:19.263 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13304], 60.00th=[13435], 00:37:19.263 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14484], 95.00th=[14877], 00:37:19.263 | 99.00th=[16450], 99.50th=[16909], 99.90th=[17433], 99.95th=[17433], 00:37:19.263 | 99.99th=[17695] 00:37:19.263 write: IOPS=4758, BW=18.6MiB/s (19.5MB/s)(18.7MiB/1004msec); 0 zone resets 00:37:19.264 slat (usec): min=3, max=25652, avg=102.89, stdev=657.94 00:37:19.264 clat (usec): min=3663, max=53793, avg=13937.51, stdev=5781.89 00:37:19.264 lat (usec): min=4392, max=53797, avg=14040.40, stdev=5807.65 00:37:19.264 clat percentiles (usec): 00:37:19.264 | 1.00th=[ 8455], 5.00th=[10028], 10.00th=[11207], 20.00th=[12387], 00:37:19.264 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13304], 00:37:19.264 | 70.00th=[13435], 80.00th=[13698], 90.00th=[15270], 95.00th=[17171], 00:37:19.264 | 99.00th=[53740], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:37:19.264 | 99.99th=[53740] 00:37:19.264 bw ( KiB/s): min=18312, max=18896, per=29.74%, avg=18604.00, stdev=412.95, samples=2 00:37:19.264 iops : min= 4578, max= 4724, avg=4651.00, stdev=103.24, samples=2 00:37:19.264 lat (msec) : 4=0.01%, 10=3.09%, 20=95.24%, 50=1.04%, 100=0.62% 00:37:19.264 cpu : usr=5.68%, sys=8.96%, ctx=497, majf=0, minf=1 00:37:19.264 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:37:19.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:19.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:19.264 issued rwts: total=4608,4778,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:19.264 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:19.264 00:37:19.264 Run status group 0 (all jobs): 00:37:19.264 READ: bw=57.7MiB/s (60.5MB/s), 9.91MiB/s-20.1MiB/s (10.4MB/s-21.1MB/s), io=58.2MiB (61.0MB), run=1004-1009msec 00:37:19.264 WRITE: bw=61.1MiB/s (64.1MB/s), 10.3MiB/s-21.9MiB/s (10.8MB/s-22.9MB/s), io=61.6MiB (64.6MB), run=1004-1009msec 00:37:19.264 00:37:19.264 Disk stats (read/write): 00:37:19.264 nvme0n1: ios=4562/4608, merge=0/0, ticks=25590/26377, in_queue=51967, util=98.00% 00:37:19.264 nvme0n2: ios=2113/2560, merge=0/0, ticks=28566/32895, in_queue=61461, util=87.20% 00:37:19.264 nvme0n3: ios=2104/2103, merge=0/0, ticks=19750/33201, in_queue=52951, util=98.33% 00:37:19.264 nvme0n4: ios=3830/4096, merge=0/0, ticks=16135/18292, in_queue=34427, util=89.70% 00:37:19.264 05:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:37:19.264 05:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=579187 00:37:19.264 05:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:37:19.264 05:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:37:19.264 [global] 00:37:19.264 thread=1 00:37:19.264 invalidate=1 00:37:19.264 rw=read 00:37:19.264 time_based=1 00:37:19.264 runtime=10 00:37:19.264 ioengine=libaio 00:37:19.264 direct=1 00:37:19.264 bs=4096 00:37:19.264 iodepth=1 00:37:19.264 norandommap=1 00:37:19.264 numjobs=1 00:37:19.264 00:37:19.264 [job0] 00:37:19.264 filename=/dev/nvme0n1 00:37:19.264 [job1] 00:37:19.264 filename=/dev/nvme0n2 00:37:19.264 [job2] 00:37:19.264 filename=/dev/nvme0n3 00:37:19.264 [job3] 00:37:19.264 filename=/dev/nvme0n4 00:37:19.264 Could not set queue depth (nvme0n1) 00:37:19.264 Could not set queue depth (nvme0n2) 00:37:19.264 Could not set queue depth (nvme0n3) 00:37:19.264 Could not set queue depth (nvme0n4) 00:37:19.264 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:19.264 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:19.264 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:19.264 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:19.264 fio-3.35 00:37:19.264 Starting 4 threads 00:37:22.539 05:32:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:37:22.539 05:32:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:37:22.539 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=16359424, buflen=4096 00:37:22.539 fio: pid=579396, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:22.539 05:32:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:22.539 05:32:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:37:22.797 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=11866112, buflen=4096 00:37:22.797 fio: pid=579395, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:23.055 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=46239744, buflen=4096 00:37:23.055 fio: pid=579369, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:23.055 05:32:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:23.055 05:32:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:37:23.313 05:32:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:23.313 05:32:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:37:23.313 fio: io_u error on file /dev/nvme0n2: Operation not supported: 
read offset=61755392, buflen=4096 00:37:23.313 fio: pid=579389, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:23.313 00:37:23.313 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=579369: Mon Dec 9 05:32:17 2024 00:37:23.313 read: IOPS=3234, BW=12.6MiB/s (13.2MB/s)(44.1MiB/3490msec) 00:37:23.313 slat (usec): min=3, max=12678, avg=13.75, stdev=186.98 00:37:23.313 clat (usec): min=165, max=42017, avg=292.05, stdev=1627.02 00:37:23.313 lat (usec): min=169, max=50988, avg=305.79, stdev=1660.01 00:37:23.313 clat percentiles (usec): 00:37:23.313 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 196], 00:37:23.313 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 217], 00:37:23.313 | 70.00th=[ 225], 80.00th=[ 239], 90.00th=[ 289], 95.00th=[ 338], 00:37:23.313 | 99.00th=[ 429], 99.50th=[ 465], 99.90th=[40633], 99.95th=[41157], 00:37:23.313 | 99.99th=[41681] 00:37:23.313 bw ( KiB/s): min= 1104, max=17128, per=38.88%, avg=13504.00, stdev=6166.75, samples=6 00:37:23.313 iops : min= 276, max= 4282, avg=3376.00, stdev=1541.69, samples=6 00:37:23.313 lat (usec) : 250=84.61%, 500=15.06%, 750=0.10%, 1000=0.01% 00:37:23.313 lat (msec) : 2=0.03%, 4=0.02%, 10=0.02%, 50=0.16% 00:37:23.313 cpu : usr=1.26%, sys=4.39%, ctx=11293, majf=0, minf=2 00:37:23.313 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:23.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.313 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.313 issued rwts: total=11290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:23.313 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:23.313 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=579389: Mon Dec 9 05:32:17 2024 00:37:23.313 read: IOPS=3936, BW=15.4MiB/s (16.1MB/s)(58.9MiB/3830msec) 00:37:23.313 slat (usec): min=3, max=15423, avg=14.50, stdev=235.85 00:37:23.313 clat (usec): min=163, max=41021, avg=235.19, stdev=474.79 00:37:23.313 lat (usec): min=168, max=41032, avg=249.69, stdev=530.46 00:37:23.313 clat percentiles (usec): 00:37:23.313 | 1.00th=[ 176], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 202], 00:37:23.313 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 229], 00:37:23.313 | 70.00th=[ 239], 80.00th=[ 251], 90.00th=[ 269], 95.00th=[ 281], 00:37:23.313 | 99.00th=[ 429], 99.50th=[ 482], 99.90th=[ 857], 99.95th=[ 1029], 00:37:23.313 | 99.99th=[40633] 00:37:23.313 bw ( KiB/s): min=14128, max=17720, per=46.30%, avg=16082.29, stdev=1161.67, samples=7 00:37:23.313 iops : min= 3532, max= 4430, avg=4020.57, stdev=290.42, samples=7 00:37:23.313 lat (usec) : 250=79.80%, 500=19.89%, 750=0.19%, 1000=0.05% 00:37:23.313 lat (msec) : 2=0.05%, 10=0.01%, 50=0.01% 00:37:23.313 cpu : usr=2.40%, sys=5.12%, ctx=15084, majf=0, minf=1 00:37:23.313 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:23.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.314 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.314 issued rwts: total=15078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:23.314 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:23.314 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=579395: Mon Dec 9 05:32:17 2024 00:37:23.314 read: IOPS=901, BW=3603KiB/s 
(3690kB/s)(11.3MiB/3216msec) 00:37:23.314 slat (nsec): min=5909, max=53674, avg=17285.57, stdev=7014.62 00:37:23.314 clat (usec): min=197, max=42168, avg=1080.85, stdev=5565.70 00:37:23.314 lat (usec): min=206, max=42216, avg=1098.14, stdev=5565.93 00:37:23.314 clat percentiles (usec): 00:37:23.314 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 241], 00:37:23.314 | 30.00th=[ 249], 40.00th=[ 269], 50.00th=[ 285], 60.00th=[ 322], 00:37:23.314 | 70.00th=[ 330], 80.00th=[ 347], 90.00th=[ 433], 95.00th=[ 469], 00:37:23.314 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:37:23.314 | 99.99th=[42206] 00:37:23.314 bw ( KiB/s): min= 208, max=12648, per=11.10%, avg=3856.00, stdev=4702.13, samples=6 00:37:23.314 iops : min= 52, max= 3162, avg=964.00, stdev=1175.53, samples=6 00:37:23.314 lat (usec) : 250=30.64%, 500=66.74%, 750=0.59% 00:37:23.314 lat (msec) : 10=0.10%, 50=1.90% 00:37:23.314 cpu : usr=0.84%, sys=2.02%, ctx=2900, majf=0, minf=2 00:37:23.314 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:23.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.314 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.314 issued rwts: total=2898,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:23.314 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:23.314 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=579396: Mon Dec 9 05:32:17 2024 00:37:23.314 read: IOPS=1359, BW=5438KiB/s (5568kB/s)(15.6MiB/2938msec) 00:37:23.314 slat (nsec): min=5540, max=53377, avg=12591.83, stdev=5919.09 00:37:23.314 clat (usec): min=188, max=42000, avg=714.04, stdev=4298.60 00:37:23.314 lat (usec): min=198, max=42035, avg=726.63, stdev=4299.57 00:37:23.314 clat percentiles (usec): 00:37:23.314 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 227], 00:37:23.314 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 253], 00:37:23.314 | 70.00th=[ 260], 80.00th=[ 281], 90.00th=[ 318], 95.00th=[ 330], 00:37:23.314 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:37:23.314 | 99.99th=[42206] 00:37:23.314 bw ( KiB/s): min= 112, max=14728, per=18.35%, avg=6372.80, stdev=7210.37, samples=5 00:37:23.314 iops : min= 28, max= 3682, avg=1593.20, stdev=1802.59, samples=5 00:37:23.314 lat (usec) : 250=56.65%, 500=42.08%, 750=0.13% 00:37:23.314 lat (msec) : 50=1.13% 00:37:23.314 cpu : usr=0.99%, sys=2.66%, ctx=3996, majf=0, minf=2 00:37:23.314 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:23.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.314 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.314 issued rwts: total=3995,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:23.314 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:23.314 00:37:23.314 Run status group 0 (all jobs): 00:37:23.314 READ: bw=33.9MiB/s (35.6MB/s), 3603KiB/s-15.4MiB/s (3690kB/s-16.1MB/s), io=130MiB (136MB), run=2938-3830msec 00:37:23.314 00:37:23.314 Disk stats (read/write): 00:37:23.314 nvme0n1: ios=10697/0, merge=0/0, ticks=3068/0, in_queue=3068, util=95.16% 00:37:23.314 nvme0n2: ios=14472/0, merge=0/0, ticks=3179/0, in_queue=3179, util=94.80% 00:37:23.314 nvme0n3: ios=2938/0, merge=0/0, ticks=3593/0, in_queue=3593, util=100.00% 00:37:23.314 nvme0n4: ios=4038/0, merge=0/0, ticks=3037/0, in_queue=3037, util=100.00% 00:37:23.572 05:32:17 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:23.572 05:32:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:37:23.829 05:32:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:23.829 05:32:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:37:24.087 05:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:24.087 05:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:37:24.345 05:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:24.345 05:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:37:24.602 05:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:37:24.602 05:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 579187 00:37:24.602 05:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:37:24.602 05:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:24.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:24.860 05:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:24.860 05:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:37:24.860 05:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:37:24.860 05:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:24.860 05:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:37:24.860 05:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:24.860 05:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:37:24.860 05:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:37:24.860 05:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:37:24.860 nvmf hotplug test: fio failed as expected 00:37:24.860 05:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:25.118 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:37:25.118 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:37:25.118 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:37:25.118 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:37:25.118 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:37:25.118 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:25.118 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:37:25.118 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:25.118 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:37:25.118 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:25.118 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:25.118 rmmod nvme_tcp 00:37:25.118 rmmod nvme_fabrics 00:37:25.118 rmmod nvme_keyring 00:37:25.118 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:25.118 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:37:25.118 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:37:25.118 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 577176 ']' 00:37:25.118 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 577176 00:37:25.118 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 577176 ']' 00:37:25.118 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 577176 00:37:25.118 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:37:25.118 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:25.118 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 577176 00:37:25.118 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:25.118 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:25.118 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 577176' 00:37:25.118 killing process with pid 577176 00:37:25.118 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 577176 00:37:25.118 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 577176 00:37:25.377 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:25.377 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:25.377 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:25.377 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:37:25.377 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:37:25.377 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:25.377 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 
00:37:25.377 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:25.377 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:25.377 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:25.377 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:25.377 05:32:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:27.912 00:37:27.912 real 0m24.110s 00:37:27.912 user 1m25.000s 00:37:27.912 sys 0m7.086s 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:27.912 ************************************ 00:37:27.912 END TEST nvmf_fio_target 00:37:27.912 ************************************ 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:37:27.912 ************************************ 00:37:27.912 START TEST nvmf_bdevio 00:37:27.912 ************************************ 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:37:27.912 * Looking for test storage... 
00:37:27.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:27.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:27.912 --rc genhtml_branch_coverage=1 00:37:27.912 --rc genhtml_function_coverage=1 00:37:27.912 --rc genhtml_legend=1 00:37:27.912 --rc geninfo_all_blocks=1 00:37:27.912 --rc geninfo_unexecuted_blocks=1 00:37:27.912 00:37:27.912 ' 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:27.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:27.912 --rc genhtml_branch_coverage=1 00:37:27.912 --rc genhtml_function_coverage=1 00:37:27.912 --rc genhtml_legend=1 00:37:27.912 --rc geninfo_all_blocks=1 00:37:27.912 --rc geninfo_unexecuted_blocks=1 00:37:27.912 00:37:27.912 ' 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:27.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:27.912 --rc genhtml_branch_coverage=1 00:37:27.912 --rc genhtml_function_coverage=1 00:37:27.912 --rc genhtml_legend=1 00:37:27.912 --rc geninfo_all_blocks=1 00:37:27.912 --rc geninfo_unexecuted_blocks=1 00:37:27.912 00:37:27.912 ' 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:27.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:27.912 --rc genhtml_branch_coverage=1 00:37:27.912 --rc genhtml_function_coverage=1 00:37:27.912 --rc genhtml_legend=1 00:37:27.912 --rc geninfo_all_blocks=1 00:37:27.912 --rc geninfo_unexecuted_blocks=1 00:37:27.912 00:37:27.912 ' 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:27.912 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:27.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:37:27.913 05:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:29.889 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:29.889 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:37:29.889 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:29.889 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:29.889 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:29.889 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:29.889 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:29.889 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:37:29.889 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:29.889 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:37:29.889 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:37:29.889 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:37:29.889 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:37:29.889 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:37:29.889 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:37:29.889 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:29.889 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:29.889 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:29.889 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:29.889 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:29.889 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:29.889 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:29.889 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:29.889 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:29.889 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:29.889 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:29.889 05:32:24 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:29.889 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:29.889 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:29.889 
05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:29.889 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:30.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:30.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:37:30.220 00:37:30.220 --- 10.0.0.2 ping statistics --- 00:37:30.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:30.220 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:30.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:30.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:37:30.220 00:37:30.220 --- 10.0.0.1 ping statistics --- 00:37:30.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:30.220 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=582044 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 582044 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 582044 ']' 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:30.220 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:30.221 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:30.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:30.221 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:30.221 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:30.221 [2024-12-09 05:32:24.310470] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
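(Editor's sketch, for readability only.) The nvmf_tcp_init sequence traced above boils down to roughly the commands below. This is a condensed sketch of what this particular run shows, not the harness itself: the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, the namespace name and the nvmf_tgt arguments are simply the values this job happened to use, and the nvmf_tgt path is shown relative to the spdk checkout.

  # move the target-side port into its own namespace; keep the initiator port on the host
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic in; the rule is tagged SPDK_NVMF so the later
  # iptables-save | grep -v SPDK_NVMF | iptables-restore cleanup can drop it
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  # sanity-check both directions, then start the target inside the namespace
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
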
00:37:30.221 [2024-12-09 05:32:24.310567] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:30.221 [2024-12-09 05:32:24.385796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:30.221 [2024-12-09 05:32:24.442206] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:30.221 [2024-12-09 05:32:24.442259] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:30.221 [2024-12-09 05:32:24.442294] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:30.221 [2024-12-09 05:32:24.442307] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:30.221 [2024-12-09 05:32:24.442317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:30.221 [2024-12-09 05:32:24.444019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:30.221 [2024-12-09 05:32:24.444080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:30.221 [2024-12-09 05:32:24.444146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:30.221 [2024-12-09 05:32:24.444149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:30.478 [2024-12-09 05:32:24.592696] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:30.478 Malloc0 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.478 05:32:24 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:30.478 [2024-12-09 05:32:24.653321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:37:30.478 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:37:30.479 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:30.479 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:30.479 { 00:37:30.479 "params": { 00:37:30.479 "name": "Nvme$subsystem", 00:37:30.479 "trtype": "$TEST_TRANSPORT", 00:37:30.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:30.479 "adrfam": "ipv4", 00:37:30.479 "trsvcid": "$NVMF_PORT", 00:37:30.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:30.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:30.479 "hdgst": ${hdgst:-false}, 00:37:30.479 "ddgst": ${ddgst:-false} 00:37:30.479 }, 00:37:30.479 "method": "bdev_nvme_attach_controller" 00:37:30.479 } 00:37:30.479 EOF 00:37:30.479 )") 00:37:30.479 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:37:30.479 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:37:30.479 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:37:30.479 05:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:30.479 "params": { 00:37:30.479 "name": "Nvme1", 00:37:30.479 "trtype": "tcp", 00:37:30.479 "traddr": "10.0.0.2", 00:37:30.479 "adrfam": "ipv4", 00:37:30.479 "trsvcid": "4420", 00:37:30.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:30.479 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:30.479 "hdgst": false, 00:37:30.479 "ddgst": false 00:37:30.479 }, 00:37:30.479 "method": "bdev_nvme_attach_controller" 00:37:30.479 }' 00:37:30.736 [2024-12-09 05:32:24.706402] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:37:30.736 [2024-12-09 05:32:24.706477] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid582074 ] 00:37:30.736 [2024-12-09 05:32:24.779493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:30.736 [2024-12-09 05:32:24.844231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:30.736 [2024-12-09 05:32:24.844288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:30.736 [2024-12-09 05:32:24.844293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:30.993 I/O targets: 00:37:30.993 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:37:30.993 00:37:30.993 00:37:30.993 CUnit - A unit testing framework for C - Version 2.1-3 00:37:30.993 http://cunit.sourceforge.net/ 00:37:30.993 00:37:30.993 00:37:30.993 Suite: bdevio tests on: Nvme1n1 00:37:30.993 Test: blockdev write read block ...passed 00:37:31.253 Test: blockdev write zeroes read block ...passed 00:37:31.253 Test: blockdev write zeroes read no split ...passed 00:37:31.253 Test: blockdev write zeroes read split ...passed 00:37:31.253 Test: blockdev write zeroes read split partial ...passed 00:37:31.253 Test: blockdev reset ...[2024-12-09 05:32:25.280154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:37:31.253 [2024-12-09 05:32:25.280286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a1cb0 (9): Bad file descriptor 00:37:31.253 [2024-12-09 05:32:25.335382] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:37:31.253 passed 00:37:31.253 Test: blockdev write read 8 blocks ...passed 00:37:31.253 Test: blockdev write read size > 128k ...passed 00:37:31.253 Test: blockdev write read invalid size ...passed 00:37:31.253 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:37:31.253 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:37:31.253 Test: blockdev write read max offset ...passed 00:37:31.253 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:37:31.253 Test: blockdev writev readv 8 blocks ...passed 00:37:31.253 Test: blockdev writev readv 30 x 1block ...passed 00:37:31.509 Test: blockdev writev readv block ...passed 00:37:31.509 Test: blockdev writev readv size > 128k ...passed 00:37:31.509 Test: blockdev writev readv size > 128k in two iovs ...passed 00:37:31.509 Test: blockdev comparev and writev ...[2024-12-09 05:32:25.508430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:31.509 [2024-12-09 05:32:25.508468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:31.509 [2024-12-09 05:32:25.508501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:31.509 [2024-12-09 05:32:25.508519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.509 [2024-12-09 05:32:25.508843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:31.509 [2024-12-09 05:32:25.508869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:37:31.509 [2024-12-09 05:32:25.508892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:31.509 [2024-12-09 05:32:25.508909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:37:31.509 [2024-12-09 05:32:25.509218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:31.509 [2024-12-09 05:32:25.509242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:37:31.509 [2024-12-09 05:32:25.509284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:31.509 [2024-12-09 05:32:25.509304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:31.509 [2024-12-09 05:32:25.509630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:31.509 [2024-12-09 05:32:25.509654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:37:31.509 [2024-12-09 05:32:25.509676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:31.509 [2024-12-09 05:32:25.509692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:31.509 passed 00:37:31.509 Test: blockdev nvme passthru rw ...passed 00:37:31.509 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:32:25.591543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:31.509 [2024-12-09 05:32:25.591570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:31.509 [2024-12-09 05:32:25.591723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:31.509 [2024-12-09 05:32:25.591746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:31.509 [2024-12-09 05:32:25.591902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:31.509 [2024-12-09 05:32:25.591926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:31.509 [2024-12-09 05:32:25.592075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:31.509 [2024-12-09 05:32:25.592098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:31.509 passed 00:37:31.509 Test: blockdev nvme admin passthru ...passed 00:37:31.509 Test: blockdev copy ...passed 00:37:31.509 00:37:31.509 Run Summary: Type Total Ran Passed Failed Inactive 00:37:31.509 suites 1 1 n/a 0 0 00:37:31.509 tests 23 23 23 0 0 00:37:31.509 asserts 152 152 152 0 n/a 00:37:31.509 00:37:31.509 Elapsed time = 0.988 seconds 00:37:31.766 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:31.766 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.766 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:31.766 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.766 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:37:31.766 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:37:31.766 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:31.766 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:37:31.766 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:31.766 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:37:31.766 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:31.766 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:31.766 rmmod nvme_tcp 00:37:31.766 rmmod nvme_fabrics 00:37:31.766 rmmod nvme_keyring 00:37:31.766 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:31.766 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:37:31.766 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
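(Editor's sketch, for readability only.) The bdevio pass above amounts to a short, fixed RPC sequence against the nvmf_tgt started earlier. It is sketched below as scripts/rpc.py calls — an assumed equivalent, since the test actually issues them through its rpc_cmd wrapper against the default /var/tmp/spdk.sock — followed by the bdevio invocation that consumes the gen_nvmf_target_json output printed earlier in the trace.

  # target side: TCP transport, a 64 MiB malloc bdev, and a listener on 10.0.0.2:4420
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: bdevio attaches Nvme1 over NVMe/TCP via the generated JSON config
  test/bdev/bdevio/bdevio --json /dev/fd/62   # fd 62 carries the bdev_nvme_attach_controller config
  # teardown mirrors setup before nvmftestfini unloads nvme-tcp and removes the namespace
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
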
00:37:31.766 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 582044 ']' 00:37:31.766 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 582044 00:37:31.766 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 582044 ']' 00:37:31.766 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 582044 00:37:31.766 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:37:31.766 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:31.766 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 582044 00:37:31.766 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:37:31.766 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:37:31.766 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 582044' 00:37:31.766 killing process with pid 582044 00:37:31.766 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 582044 00:37:31.766 05:32:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 582044 00:37:32.333 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:32.333 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:32.333 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:32.333 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:37:32.333 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:37:32.333 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:32.333 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:37:32.333 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:32.333 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:32.333 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:32.333 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:32.333 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:34.239 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:34.239 00:37:34.239 real 0m6.675s 00:37:34.239 user 0m10.370s 00:37:34.239 sys 0m2.282s 00:37:34.239 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:34.239 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:34.239 ************************************ 00:37:34.239 END TEST nvmf_bdevio 00:37:34.239 ************************************ 00:37:34.239 05:32:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:37:34.239 00:37:34.239 real 3m58.200s 00:37:34.239 user 10m19.556s 00:37:34.239 sys 1m8.262s 00:37:34.239 
05:32:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:34.239 05:32:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:37:34.239 ************************************ 00:37:34.239 END TEST nvmf_target_core 00:37:34.239 ************************************ 00:37:34.239 05:32:28 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:37:34.239 05:32:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:34.239 05:32:28 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:34.239 05:32:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:34.239 ************************************ 00:37:34.239 START TEST nvmf_target_extra 00:37:34.239 ************************************ 00:37:34.239 05:32:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:37:34.501 * Looking for test storage... 00:37:34.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:34.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.501 --rc genhtml_branch_coverage=1 00:37:34.501 --rc genhtml_function_coverage=1 00:37:34.501 --rc genhtml_legend=1 00:37:34.501 --rc geninfo_all_blocks=1 00:37:34.501 --rc geninfo_unexecuted_blocks=1 00:37:34.501 00:37:34.501 ' 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:34.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.501 --rc genhtml_branch_coverage=1 00:37:34.501 --rc genhtml_function_coverage=1 00:37:34.501 --rc genhtml_legend=1 00:37:34.501 --rc geninfo_all_blocks=1 00:37:34.501 --rc geninfo_unexecuted_blocks=1 00:37:34.501 00:37:34.501 ' 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:34.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.501 --rc genhtml_branch_coverage=1 00:37:34.501 --rc genhtml_function_coverage=1 00:37:34.501 --rc genhtml_legend=1 00:37:34.501 --rc geninfo_all_blocks=1 00:37:34.501 --rc geninfo_unexecuted_blocks=1 00:37:34.501 00:37:34.501 ' 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:34.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.501 --rc genhtml_branch_coverage=1 00:37:34.501 --rc genhtml_function_coverage=1 00:37:34.501 --rc genhtml_legend=1 00:37:34.501 --rc geninfo_all_blocks=1 00:37:34.501 --rc geninfo_unexecuted_blocks=1 00:37:34.501 00:37:34.501 ' 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:34.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:34.501 05:32:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:37:34.501 ************************************ 00:37:34.501 START TEST nvmf_example 00:37:34.501 ************************************ 00:37:34.502 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:37:34.502 * Looking for test storage... 
00:37:34.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:34.502 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:34.502 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:37:34.502 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:34.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.763 --rc genhtml_branch_coverage=1 00:37:34.763 --rc genhtml_function_coverage=1 00:37:34.763 --rc genhtml_legend=1 00:37:34.763 --rc geninfo_all_blocks=1 00:37:34.763 --rc geninfo_unexecuted_blocks=1 00:37:34.763 00:37:34.763 ' 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:34.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.763 --rc genhtml_branch_coverage=1 00:37:34.763 --rc genhtml_function_coverage=1 00:37:34.763 --rc genhtml_legend=1 00:37:34.763 --rc geninfo_all_blocks=1 00:37:34.763 --rc geninfo_unexecuted_blocks=1 00:37:34.763 00:37:34.763 ' 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:34.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.763 --rc genhtml_branch_coverage=1 00:37:34.763 --rc genhtml_function_coverage=1 00:37:34.763 --rc genhtml_legend=1 00:37:34.763 --rc geninfo_all_blocks=1 00:37:34.763 --rc geninfo_unexecuted_blocks=1 00:37:34.763 00:37:34.763 ' 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:34.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.763 --rc genhtml_branch_coverage=1 00:37:34.763 --rc genhtml_function_coverage=1 00:37:34.763 --rc genhtml_legend=1 00:37:34.763 --rc geninfo_all_blocks=1 00:37:34.763 --rc geninfo_unexecuted_blocks=1 00:37:34.763 00:37:34.763 ' 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:37:34.763 05:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:34.763 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:34.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:37:34.764 05:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:37:34.764 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:37:36.669 05:32:30 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:36.669 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:36.669 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:36.669 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:36.669 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:36.670 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:36.670 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:36.670 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:36.670 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:36.670 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:36.670 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:36.670 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:36.670 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:36.670 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:36.670 05:32:30 
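The discovery above works from per-family PCI device-ID allow-lists (E810: 0x1592/0x159b, X722: 0x37d2, plus a set of Mellanox ConnectX IDs); with SPDK_TEST_NVMF_NICS=e810 only the E810 list is kept, and each matching PCI function is then mapped to its kernel netdev through sysfs. A minimal stand-alone sketch of that mapping step (illustrative only; the suite does this inside gather_supported_nvmf_pci_devs):

# Map a PCI function to the netdev bound to it, as the trace above does
for pci in 0000:0a:00.0 0000:0a:00.1; do             # the two E810 (0x159b) ports found here
  for path in /sys/bus/pci/devices/"$pci"/net/*; do
    [[ -e $path ]] || continue                        # skip ports with no netdev bound
    echo "Found net devices under $pci: ${path##*/}"  # -> cvl_0_0 / cvl_0_1 in this run
  done
done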
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:36.670 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:37:36.670 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:36.670 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:36.670 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:36.670 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:36.670 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:36.670 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:36.670 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:36.670 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:36.670 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:36.670 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:36.670 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:36.670 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:36.670 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:36.670 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:36.670 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:36.929 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:36.930 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:36.930 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:36.930 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:36.930 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:36.930 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:36.930 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:36.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:36.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:37:36.930 00:37:36.930 --- 10.0.0.2 ping statistics --- 00:37:36.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:36.930 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:36.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:36.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:37:36.930 00:37:36.930 --- 10.0.0.1 ping statistics --- 00:37:36.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:36.930 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=584335 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 584335 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 584335 ']' 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:36.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:36.930 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:37:38.300 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:38.300 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:37:38.300 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:37:38.300 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:38.300 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:37:38.300 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:38.300 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.300 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:37:38.300 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.300 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:37:38.300 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.300 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:37:38.300 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.300 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:37:38.300 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:38.300 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.300 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:37:38.300 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.300 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:37:38.300 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:38.300 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.300 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:37:38.300 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.300 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:38.300 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.300 05:32:32 
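Condensed, the target-side bring-up just traced looks like the following. This is a sketch assembled from the traced commands, not a copy of nvmf/common.sh or nvmf_example.sh; rpc_cmd is the suite's RPC helper (ultimately scripts/rpc.py against /var/tmp/spdk.sock), and flag annotations are limited to what the trace itself shows.

# 1) Split the back-to-back E810 pair into target and initiator sides.
ip netns add cvl_0_0_ns_spdk                        # the target lives in its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
#   (the suite also tags this rule with an SPDK_NVMF comment so teardown can find it)
ping -c 1 10.0.0.2                                  # sanity-check both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp                                   # initiator-side NVMe/TCP driver

# 2) Start the example target in the namespace and wire up a subsystem over RPC.
ip netns exec cvl_0_0_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512                   # 64 MB RAM-backed bdev, 512 B blocks -> Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420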
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:37:38.300 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.300 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:37:38.301 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:48.275 Initializing NVMe Controllers 00:37:48.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:48.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:48.275 Initialization complete. Launching workers. 00:37:48.275 ======================================================== 00:37:48.275 Latency(us) 00:37:48.275 Device Information : IOPS MiB/s Average min max 00:37:48.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14928.53 58.31 4288.93 886.09 15637.60 00:37:48.275 ======================================================== 00:37:48.275 Total : 14928.53 58.31 4288.93 886.09 15637.60 00:37:48.275 00:37:48.533 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:37:48.533 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:37:48.533 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:48.533 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:37:48.533 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:48.533 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:37:48.533 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:48.533 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:48.533 rmmod nvme_tcp 00:37:48.533 rmmod nvme_fabrics 00:37:48.533 rmmod nvme_keyring 00:37:48.533 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:48.533 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:37:48.533 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:37:48.533 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 584335 ']' 00:37:48.533 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 584335 00:37:48.533 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 584335 ']' 00:37:48.533 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 584335 00:37:48.533 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:37:48.533 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:48.533 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 584335 00:37:48.533 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # 
process_name=nvmf 00:37:48.533 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:37:48.533 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 584335' 00:37:48.533 killing process with pid 584335 00:37:48.533 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 584335 00:37:48.533 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 584335 00:37:48.792 nvmf threads initialize successfully 00:37:48.792 bdev subsystem init successfully 00:37:48.792 created a nvmf target service 00:37:48.792 create targets's poll groups done 00:37:48.792 all subsystems of target started 00:37:48.792 nvmf target is running 00:37:48.792 all subsystems of target stopped 00:37:48.792 destroy targets's poll groups done 00:37:48.792 destroyed the nvmf target service 00:37:48.792 bdev subsystem finish successfully 00:37:48.792 nvmf threads destroy successfully 00:37:48.792 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:48.792 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:48.792 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:48.792 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:37:48.792 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:48.792 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:37:48.792 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:37:48.792 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:48.792 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:48.792 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:48.792 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:48.792 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:51.332 05:32:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:51.332 05:32:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:37:51.332 05:32:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:51.332 05:32:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:37:51.332 00:37:51.332 real 0m16.369s 00:37:51.332 user 0m46.033s 00:37:51.332 sys 0m3.430s 00:37:51.332 05:32:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:51.332 05:32:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:37:51.332 ************************************ 00:37:51.332 END TEST nvmf_example 00:37:51.332 ************************************ 00:37:51.332 05:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:37:51.332 05:32:45 
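Reading the perf pass traced above: spdk_nvme_perf drove a 4 KiB random mixed workload (-w randrw -M 30, i.e. 30% reads) at queue depth 64 for 10 seconds against the exported namespace and reports roughly 14.9k IOPS (about 58 MiB/s) at an average latency of about 4.3 ms (min ~0.9 ms, max ~15.6 ms). The nvmftestfini teardown that follows mirrors the setup; condensed it amounts to the following (a sketch of the traced commands, not the helper itself; the netns removal step is an assumption about what _remove_spdk_ns does):

# Undo the example-test setup in reverse order
modprobe -v -r nvme-tcp                             # also unloads nvme_fabrics / nvme_keyring
kill "$nvmfpid" && wait "$nvmfpid"                  # stop the example target (pid 584335 in this run)
iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the SPDK-tagged ACCEPT rule
ip netns delete cvl_0_0_ns_spdk                     # assumed equivalent of _remove_spdk_ns; frees cvl_0_0
ip -4 addr flush cvl_0_1                            # clear the initiator-side address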
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:51.332 05:32:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:51.332 05:32:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:37:51.332 ************************************ 00:37:51.332 START TEST nvmf_filesystem 00:37:51.332 ************************************ 00:37:51.332 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:37:51.332 * Looking for test storage... 00:37:51.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:51.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.333 --rc genhtml_branch_coverage=1 00:37:51.333 --rc genhtml_function_coverage=1 00:37:51.333 --rc genhtml_legend=1 00:37:51.333 --rc geninfo_all_blocks=1 00:37:51.333 --rc geninfo_unexecuted_blocks=1 00:37:51.333 00:37:51.333 ' 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:51.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.333 --rc genhtml_branch_coverage=1 00:37:51.333 --rc genhtml_function_coverage=1 00:37:51.333 --rc genhtml_legend=1 00:37:51.333 --rc geninfo_all_blocks=1 00:37:51.333 --rc geninfo_unexecuted_blocks=1 00:37:51.333 00:37:51.333 ' 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:51.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.333 --rc genhtml_branch_coverage=1 00:37:51.333 --rc genhtml_function_coverage=1 00:37:51.333 --rc genhtml_legend=1 00:37:51.333 --rc geninfo_all_blocks=1 00:37:51.333 --rc geninfo_unexecuted_blocks=1 00:37:51.333 00:37:51.333 ' 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:51.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.333 --rc genhtml_branch_coverage=1 00:37:51.333 --rc genhtml_function_coverage=1 00:37:51.333 --rc genhtml_legend=1 00:37:51.333 --rc geninfo_all_blocks=1 00:37:51.333 --rc geninfo_unexecuted_blocks=1 00:37:51.333 00:37:51.333 ' 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:37:51.333 05:32:45 
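The lt 1.15 2 / cmp_versions trace above is scripts/common.sh checking whether the detected lcov (1.15 here) is older than 2, which decides that the legacy --rc lcov_branch_coverage/--rc lcov_function_coverage option spellings are needed. An illustrative re-implementation of that per-component comparison (not the script itself):

# Compare two dotted versions field by field, as cmp_versions does in the trace above
version_lt() {
  local -a a b
  IFS=.-: read -ra a <<< "$1"
  IFS=.-: read -ra b <<< "$2"
  local i
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0   # first differing field decides
    (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
  done
  return 1                                            # equal -> not less-than
}
version_lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* option names"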
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:37:51.333 
05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:37:51.333 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:37:51.334 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:37:51.334 #define SPDK_CONFIG_H 00:37:51.334 #define SPDK_CONFIG_AIO_FSDEV 1 00:37:51.334 #define SPDK_CONFIG_APPS 1 00:37:51.334 #define SPDK_CONFIG_ARCH native 00:37:51.334 #undef SPDK_CONFIG_ASAN 00:37:51.334 #undef SPDK_CONFIG_AVAHI 00:37:51.334 #undef SPDK_CONFIG_CET 00:37:51.334 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:37:51.334 #define SPDK_CONFIG_COVERAGE 1 00:37:51.334 #define SPDK_CONFIG_CROSS_PREFIX 00:37:51.334 #undef SPDK_CONFIG_CRYPTO 00:37:51.334 #undef SPDK_CONFIG_CRYPTO_MLX5 00:37:51.334 #undef SPDK_CONFIG_CUSTOMOCF 00:37:51.334 #undef SPDK_CONFIG_DAOS 00:37:51.334 #define SPDK_CONFIG_DAOS_DIR 00:37:51.334 #define SPDK_CONFIG_DEBUG 1 00:37:51.334 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:37:51.334 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:37:51.334 #define SPDK_CONFIG_DPDK_INC_DIR 00:37:51.334 #define SPDK_CONFIG_DPDK_LIB_DIR 00:37:51.334 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:37:51.334 #undef SPDK_CONFIG_DPDK_UADK 00:37:51.334 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:37:51.334 #define SPDK_CONFIG_EXAMPLES 1 00:37:51.334 #undef SPDK_CONFIG_FC 00:37:51.334 #define SPDK_CONFIG_FC_PATH 00:37:51.334 #define SPDK_CONFIG_FIO_PLUGIN 1 00:37:51.334 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:37:51.334 #define SPDK_CONFIG_FSDEV 1 00:37:51.334 #undef SPDK_CONFIG_FUSE 00:37:51.334 #undef SPDK_CONFIG_FUZZER 00:37:51.334 #define SPDK_CONFIG_FUZZER_LIB 00:37:51.334 #undef SPDK_CONFIG_GOLANG 00:37:51.334 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:37:51.334 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:37:51.334 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:37:51.334 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:37:51.334 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:37:51.334 #undef SPDK_CONFIG_HAVE_LIBBSD 00:37:51.334 #undef SPDK_CONFIG_HAVE_LZ4 00:37:51.334 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:37:51.334 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:37:51.334 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:37:51.334 #define SPDK_CONFIG_IDXD 1 00:37:51.334 #define SPDK_CONFIG_IDXD_KERNEL 1 00:37:51.334 #undef SPDK_CONFIG_IPSEC_MB 00:37:51.334 #define SPDK_CONFIG_IPSEC_MB_DIR 00:37:51.334 #define SPDK_CONFIG_ISAL 1 00:37:51.334 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:37:51.334 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:37:51.334 #define SPDK_CONFIG_LIBDIR 00:37:51.334 #undef SPDK_CONFIG_LTO 00:37:51.334 #define SPDK_CONFIG_MAX_LCORES 128 00:37:51.334 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:37:51.334 #define SPDK_CONFIG_NVME_CUSE 1 00:37:51.334 #undef SPDK_CONFIG_OCF 00:37:51.334 #define SPDK_CONFIG_OCF_PATH 00:37:51.334 #define SPDK_CONFIG_OPENSSL_PATH 00:37:51.334 #undef SPDK_CONFIG_PGO_CAPTURE 00:37:51.334 #define SPDK_CONFIG_PGO_DIR 00:37:51.334 #undef SPDK_CONFIG_PGO_USE 00:37:51.334 #define SPDK_CONFIG_PREFIX /usr/local 00:37:51.334 #undef SPDK_CONFIG_RAID5F 00:37:51.334 #undef SPDK_CONFIG_RBD 00:37:51.334 #define SPDK_CONFIG_RDMA 1 00:37:51.334 #define SPDK_CONFIG_RDMA_PROV verbs 00:37:51.334 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:37:51.334 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:37:51.334 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:37:51.334 #define SPDK_CONFIG_SHARED 1 00:37:51.334 #undef SPDK_CONFIG_SMA 00:37:51.334 #define SPDK_CONFIG_TESTS 1 00:37:51.334 #undef SPDK_CONFIG_TSAN 
00:37:51.334 #define SPDK_CONFIG_UBLK 1 00:37:51.334 #define SPDK_CONFIG_UBSAN 1 00:37:51.334 #undef SPDK_CONFIG_UNIT_TESTS 00:37:51.334 #undef SPDK_CONFIG_URING 00:37:51.334 #define SPDK_CONFIG_URING_PATH 00:37:51.334 #undef SPDK_CONFIG_URING_ZNS 00:37:51.334 #undef SPDK_CONFIG_USDT 00:37:51.334 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:37:51.334 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:37:51.334 #define SPDK_CONFIG_VFIO_USER 1 00:37:51.334 #define SPDK_CONFIG_VFIO_USER_DIR 00:37:51.334 #define SPDK_CONFIG_VHOST 1 00:37:51.334 #define SPDK_CONFIG_VIRTIO 1 00:37:51.334 #undef SPDK_CONFIG_VTUNE 00:37:51.334 #define SPDK_CONFIG_VTUNE_DIR 00:37:51.334 #define SPDK_CONFIG_WERROR 1 00:37:51.335 #define SPDK_CONFIG_WPDK_DIR 00:37:51.335 #undef SPDK_CONFIG_XNVME 00:37:51.335 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:37:51.335 05:32:45 
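The long CONFIG_*=y/n block and the config.h dump above are two views of the same configure output: test/common/build_config.sh carries the options as shell variables, while include/spdk/config.h carries them as SPDK_CONFIG_* defines, so applications.sh only needs to pattern-match the header to learn whether this is a debug build before consulting the SPDK_AUTOTEST_DEBUG_APPS knob. A hedged sketch of that check (mirroring the [[ ... == *#define SPDK_CONFIG_DEBUG* ]] test and the (( SPDK_AUTOTEST_DEBUG_APPS )) gate seen in the trace; what happens once both hold is not shown here):

# Only consider debug variants of the test apps on a debug build with the opt-in set
config_h=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
if [[ -e $config_h && $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]] \
   && (( SPDK_AUTOTEST_DEBUG_APPS )); then
  echo "debug build and SPDK_AUTOTEST_DEBUG_APPS set: debug app variants may be used"
fi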
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:37:51.335 05:32:45 
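The long run of ': 0' / 'export SPDK_TEST_...' pairs here is autotest_common.sh giving every test flag a default and exporting it; the values shown as non-zero in this trace (SPDK_RUN_FUNCTIONAL_TEST, SPDK_TEST_NVMF, SPDK_TEST_NVME_CLI, SPDK_TEST_VFIOUSER, SPDK_RUN_UBSAN, and SPDK_TEST_NVMF_TRANSPORT=tcp) are the ones the job configuration set beforehand. The underlying idiom is presumably the ': ${VAR:=default}' form, an assumption consistent with the already-expanded ': 0' / ': 1' lines in the xtrace:

# Assign a default only if the job config did not already set the flag, then export it
# (assumed idiom; the xtrace only shows the expanded result)
: "${SPDK_TEST_NVMF:=0}"    # already 1 in this run, so the trace shows ': 1'
export SPDK_TEST_NVMF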
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:37:51.335 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:37:51.336 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
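The LD_LIBRARY_PATH and PYTHONPATH values above carry the same directories several times over, which is what happens when the environment-setup script is sourced repeatedly and prepends unconditionally; harmless for the test run, just noisy. If the duplication ever needed trimming, a small helper along the lines of the hypothetical dedup_path below (not part of the SPDK scripts) keeps only the first occurrence of each colon-separated entry:

  # Hypothetical helper: collapse duplicates in a colon-separated list,
  # keeping the first occurrence of each entry and dropping empty fields.
  dedup_path() {
      local out='' entry
      local IFS=':'
      for entry in $1; do
          [ -n "$entry" ] || continue
          case ":$out:" in
              *":$entry:"*) ;;                   # already seen, skip
              *) out="${out:+$out:}$entry" ;;
          esac
      done
      printf '%s\n' "$out"
  }
  # Example: export LD_LIBRARY_PATH="$(dedup_path "$LD_LIBRARY_PATH")"
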
00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
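The suppression-file steps traced above (remove any stale /var/tmp/asan_suppression_file, write "leak:libfuse3.so" into it, point LSAN_OPTIONS at it) follow the standard LeakSanitizer mechanism: known benign leaks are listed one "leak:<pattern>" per line in a text file that is referenced via the suppressions= option. A standalone sketch of the same mechanism, reusing the values from this run:

  # Build a LeakSanitizer suppression file and point the sanitizer at it.
  supp=/var/tmp/asan_suppression_file
  rm -f "$supp"
  echo 'leak:libfuse3.so' >> "$supp"        # ignore leaks attributed to libfuse3
  export LSAN_OPTIONS="suppressions=$supp"
  # ASAN/UBSAN behaviour is tuned through the analogous *_OPTIONS variables,
  # e.g. the UBSAN_OPTIONS value exported just above in the trace.
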
00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 586040 ]] 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 586040 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:37:51.337 
05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.vH8i3k 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.vH8i3k/tests/target /tmp/spdk.vH8i3k 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:37:51.337 05:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=56143241216 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988528128 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5845286912 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30984232960 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994264064 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375277568 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397707264 00:37:51.337 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22429696 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30993924096 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994264064 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=339968 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:37:51.338 05:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:37:51.338 * Looking for test storage... 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=56143241216 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8059879424 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:51.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:37:51.338 05:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:51.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.338 --rc genhtml_branch_coverage=1 00:37:51.338 --rc genhtml_function_coverage=1 00:37:51.338 --rc genhtml_legend=1 00:37:51.338 --rc geninfo_all_blocks=1 00:37:51.338 --rc geninfo_unexecuted_blocks=1 00:37:51.338 00:37:51.338 ' 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:51.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.338 --rc genhtml_branch_coverage=1 00:37:51.338 --rc genhtml_function_coverage=1 00:37:51.338 --rc genhtml_legend=1 00:37:51.338 --rc geninfo_all_blocks=1 00:37:51.338 --rc geninfo_unexecuted_blocks=1 00:37:51.338 00:37:51.338 ' 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:51.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.338 --rc genhtml_branch_coverage=1 00:37:51.338 --rc genhtml_function_coverage=1 00:37:51.338 --rc genhtml_legend=1 00:37:51.338 --rc geninfo_all_blocks=1 00:37:51.338 --rc geninfo_unexecuted_blocks=1 00:37:51.338 00:37:51.338 ' 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:51.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.338 --rc genhtml_branch_coverage=1 00:37:51.338 --rc genhtml_function_coverage=1 00:37:51.338 --rc genhtml_legend=1 00:37:51.338 --rc geninfo_all_blocks=1 00:37:51.338 --rc geninfo_unexecuted_blocks=1 00:37:51.338 00:37:51.338 ' 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:51.338 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:51.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:37:51.339 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:37:53.870 
05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:53.870 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:53.870 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:53.870 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:53.870 Found net devices under 
0000:0a:00.1: cvl_0_1 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:53.870 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:53.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:53.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:37:53.870 00:37:53.870 --- 10.0.0.2 ping statistics --- 00:37:53.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:53.871 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:53.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:53.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:37:53.871 00:37:53.871 --- 10.0.0.1 ping statistics --- 00:37:53.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:53.871 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:37:53.871 ************************************ 00:37:53.871 START TEST nvmf_filesystem_no_in_capsule 00:37:53.871 ************************************ 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
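The block above is the nvmf_tcp_init step: it moves the target-side E810 port into its own network namespace, addresses both ends, opens TCP port 4420, and ping-checks the path in both directions. A condensed sketch of that setup, using the interface names and addresses from this particular run (they are specific to this rig), is:

    ip netns add cvl_0_0_ns_spdk                          # namespace that owns the target-side port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator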
00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=587800 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 587800 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 587800 ']' 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:53.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:53.871 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:37:53.871 [2024-12-09 05:32:47.847180] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:37:53.871 [2024-12-09 05:32:47.847293] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:53.871 [2024-12-09 05:32:47.926069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:53.871 [2024-12-09 05:32:47.986330] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:53.871 [2024-12-09 05:32:47.986399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:53.871 [2024-12-09 05:32:47.986436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:53.871 [2024-12-09 05:32:47.986448] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:53.871 [2024-12-09 05:32:47.986458] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
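At this point the nvmf_tgt application has been started inside the namespace and the test waits for its RPC socket before configuring anything. A rough equivalent of that launch-and-wait, with the waitforlisten helper paraphrased as a simple poll loop (the rpc.py path is assumed from this workspace layout, not shown verbatim in the log), is:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    until $rpc rpc_get_methods >/dev/null 2>&1; do   # keep polling until the app answers on its RPC socket
        sleep 0.5
    done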
00:37:53.871 [2024-12-09 05:32:47.988081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:53.871 [2024-12-09 05:32:47.988147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:53.871 [2024-12-09 05:32:47.988215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:53.871 [2024-12-09 05:32:47.988218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:54.129 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:54.129 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:37:54.129 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:54.129 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:54.129 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:37:54.129 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:54.129 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:37:54.129 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:37:54.129 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.129 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:37:54.129 [2024-12-09 05:32:48.146571] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:54.129 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.129 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:37:54.129 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.129 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:37:54.129 Malloc1 00:37:54.129 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.129 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:37:54.129 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.129 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:37:54.129 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.129 05:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:37:54.129 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.129 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:37:54.129 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.129 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:54.129 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.129 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:37:54.129 [2024-12-09 05:32:48.343971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:54.130 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.130 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:37:54.130 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:37:54.130 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:37:54.130 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:37:54.130 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:37:54.130 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:37:54.130 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.130 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:37:54.388 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.388 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:37:54.388 { 00:37:54.388 "name": "Malloc1", 00:37:54.388 "aliases": [ 00:37:54.388 "63113d30-ce70-45d3-b744-ad37227f93e3" 00:37:54.388 ], 00:37:54.388 "product_name": "Malloc disk", 00:37:54.388 "block_size": 512, 00:37:54.388 "num_blocks": 1048576, 00:37:54.388 "uuid": "63113d30-ce70-45d3-b744-ad37227f93e3", 00:37:54.388 "assigned_rate_limits": { 00:37:54.388 "rw_ios_per_sec": 0, 00:37:54.388 "rw_mbytes_per_sec": 0, 00:37:54.388 "r_mbytes_per_sec": 0, 00:37:54.388 "w_mbytes_per_sec": 0 00:37:54.388 }, 00:37:54.388 "claimed": true, 00:37:54.388 "claim_type": "exclusive_write", 00:37:54.388 "zoned": false, 00:37:54.388 "supported_io_types": { 00:37:54.388 "read": 
true, 00:37:54.388 "write": true, 00:37:54.388 "unmap": true, 00:37:54.388 "flush": true, 00:37:54.388 "reset": true, 00:37:54.388 "nvme_admin": false, 00:37:54.388 "nvme_io": false, 00:37:54.388 "nvme_io_md": false, 00:37:54.388 "write_zeroes": true, 00:37:54.388 "zcopy": true, 00:37:54.388 "get_zone_info": false, 00:37:54.388 "zone_management": false, 00:37:54.388 "zone_append": false, 00:37:54.388 "compare": false, 00:37:54.388 "compare_and_write": false, 00:37:54.388 "abort": true, 00:37:54.388 "seek_hole": false, 00:37:54.388 "seek_data": false, 00:37:54.388 "copy": true, 00:37:54.388 "nvme_iov_md": false 00:37:54.388 }, 00:37:54.388 "memory_domains": [ 00:37:54.388 { 00:37:54.388 "dma_device_id": "system", 00:37:54.388 "dma_device_type": 1 00:37:54.388 }, 00:37:54.388 { 00:37:54.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:54.388 "dma_device_type": 2 00:37:54.388 } 00:37:54.388 ], 00:37:54.388 "driver_specific": {} 00:37:54.388 } 00:37:54.388 ]' 00:37:54.388 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:37:54.388 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:37:54.388 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:37:54.388 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:37:54.388 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:37:54.388 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:37:54.388 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:37:54.388 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:54.952 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:37:54.952 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:37:54.952 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:37:54.952 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:37:54.952 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:37:56.849 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:37:56.849 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:37:56.850 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:37:56.850 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:37:56.850 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:37:56.850 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:37:56.850 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:37:56.850 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:37:56.850 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:37:56.850 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:37:56.850 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:37:56.850 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:56.850 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:37:56.850 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:37:56.850 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:37:56.850 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:37:56.850 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:37:57.415 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:37:57.673 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:37:58.606 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:37:58.606 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:37:58.606 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:58.606 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:58.606 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:37:58.606 ************************************ 00:37:58.606 START TEST filesystem_ext4 00:37:58.606 ************************************ 00:37:58.606 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
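The rpc_cmd calls and the nvme connect captured above amount to the following provisioning sequence, written out here as direct rpc.py invocations for readability (rpc_cmd is effectively a wrapper around rpc.py; the hostnqn/hostid arguments from this run are left off the connect line for brevity):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0        # -c 0: no in-capsule data in this pass
    $rpc bdev_malloc_create 512 512 -b Malloc1               # 512 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: connect to the subsystem and carve one GPT partition for the filesystem tests
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe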
00:37:58.606 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:37:58.606 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:37:58.606 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:37:58.606 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:37:58.606 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:37:58.606 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:37:58.606 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:37:58.606 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:37:58.606 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:37:58.606 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:37:58.606 mke2fs 1.47.0 (5-Feb-2023) 00:37:58.864 Discarding device blocks: 0/522240 done 00:37:58.864 Creating filesystem with 522240 1k blocks and 130560 inodes 00:37:58.864 Filesystem UUID: 79f9b74d-ee01-4260-bc5b-8c058f381400 00:37:58.864 Superblock backups stored on blocks: 00:37:58.864 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:37:58.864 00:37:58.864 Allocating group tables: 0/64 done 00:37:58.864 Writing inode tables: 0/64 done 00:37:59.121 Creating journal (8192 blocks): done 00:37:59.121 Writing superblocks and filesystem accounting information: 0/64 done 00:37:59.121 00:37:59.121 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:37:59.122 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:38:04.380 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:38:04.380 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:38:04.380 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:38:04.380 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:38:04.380 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:38:04.380 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:38:04.380 
05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 587800 00:38:04.380 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:38:04.380 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:38:04.380 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:38:04.380 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:38:04.380 00:38:04.380 real 0m5.757s 00:38:04.380 user 0m0.029s 00:38:04.380 sys 0m0.055s 00:38:04.380 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:04.380 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:38:04.380 ************************************ 00:38:04.380 END TEST filesystem_ext4 00:38:04.380 ************************************ 00:38:04.380 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:38:04.380 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:04.380 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:04.380 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:38:04.380 ************************************ 00:38:04.380 START TEST filesystem_btrfs 00:38:04.380 ************************************ 00:38:04.380 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:38:04.380 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:38:04.380 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:38:04.380 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:38:04.380 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:38:04.380 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:38:04.380 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:38:04.380 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:38:04.380 05:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:38:04.380 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:38:04.380 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:38:04.943 btrfs-progs v6.8.1 00:38:04.943 See https://btrfs.readthedocs.io for more information. 00:38:04.943 00:38:04.943 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:38:04.943 NOTE: several default settings have changed in version 5.15, please make sure 00:38:04.943 this does not affect your deployments: 00:38:04.943 - DUP for metadata (-m dup) 00:38:04.943 - enabled no-holes (-O no-holes) 00:38:04.943 - enabled free-space-tree (-R free-space-tree) 00:38:04.943 00:38:04.943 Label: (null) 00:38:04.943 UUID: b87a3af6-3879-47b4-9f53-3fea0812c965 00:38:04.943 Node size: 16384 00:38:04.943 Sector size: 4096 (CPU page size: 4096) 00:38:04.943 Filesystem size: 510.00MiB 00:38:04.943 Block group profiles: 00:38:04.943 Data: single 8.00MiB 00:38:04.943 Metadata: DUP 32.00MiB 00:38:04.943 System: DUP 8.00MiB 00:38:04.943 SSD detected: yes 00:38:04.943 Zoned device: no 00:38:04.943 Features: extref, skinny-metadata, no-holes, free-space-tree 00:38:04.943 Checksum: crc32c 00:38:04.943 Number of devices: 1 00:38:04.943 Devices: 00:38:04.943 ID SIZE PATH 00:38:04.943 1 510.00MiB /dev/nvme0n1p1 00:38:04.943 00:38:04.943 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:38:04.943 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:38:05.200 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:38:05.200 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:38:05.200 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:38:05.200 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:38:05.200 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:38:05.200 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:38:05.200 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 587800 00:38:05.200 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:38:05.200 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:38:05.200 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:38:05.200 
05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:38:05.200 00:38:05.200 real 0m0.752s 00:38:05.200 user 0m0.019s 00:38:05.200 sys 0m0.096s 00:38:05.200 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:05.200 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:38:05.200 ************************************ 00:38:05.200 END TEST filesystem_btrfs 00:38:05.200 ************************************ 00:38:05.200 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:38:05.200 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:05.200 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:05.200 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:38:05.200 ************************************ 00:38:05.200 START TEST filesystem_xfs 00:38:05.200 ************************************ 00:38:05.200 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:38:05.200 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:38:05.200 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:38:05.200 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:38:05.200 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:38:05.200 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:38:05.200 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:38:05.200 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:38:05.200 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:38:05.200 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:38:05.200 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:38:05.458 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:38:05.458 = sectsz=512 attr=2, projid32bit=1 00:38:05.458 = crc=1 finobt=1, sparse=1, rmapbt=0 00:38:05.458 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:38:05.458 data 
= bsize=4096 blocks=130560, imaxpct=25 00:38:05.458 = sunit=0 swidth=0 blks 00:38:05.458 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:38:05.458 log =internal log bsize=4096 blocks=16384, version=2 00:38:05.458 = sectsz=512 sunit=0 blks, lazy-count=1 00:38:05.458 realtime =none extsz=4096 blocks=0, rtextents=0 00:38:06.022 Discarding blocks...Done. 00:38:06.022 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:38:06.022 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:38:07.915 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:38:07.915 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:38:07.915 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:38:07.915 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:38:07.915 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:38:07.915 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:38:07.915 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 587800 00:38:07.915 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:38:07.915 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:38:07.915 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:38:07.915 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:38:07.915 00:38:07.915 real 0m2.644s 00:38:07.916 user 0m0.025s 00:38:07.916 sys 0m0.055s 00:38:07.916 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:07.916 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:38:07.916 ************************************ 00:38:07.916 END TEST filesystem_xfs 00:38:07.916 ************************************ 00:38:07.916 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:38:08.173 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:38:08.173 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:38:08.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:38:08.173 05:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:38:08.173 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:38:08.173 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:38:08.173 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:08.173 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:38:08.173 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:08.173 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:38:08.173 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:08.173 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.173 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:38:08.173 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.173 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:38:08.173 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 587800 00:38:08.173 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 587800 ']' 00:38:08.173 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 587800 00:38:08.173 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:38:08.173 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:08.173 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 587800 00:38:08.173 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:08.173 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:08.173 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 587800' 00:38:08.173 killing process with pid 587800 00:38:08.173 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 587800 00:38:08.173 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 587800 00:38:08.737 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:38:08.737 00:38:08.737 real 0m15.062s 00:38:08.737 user 0m58.086s 00:38:08.737 sys 0m2.001s 00:38:08.737 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:08.737 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:38:08.737 ************************************ 00:38:08.737 END TEST nvmf_filesystem_no_in_capsule 00:38:08.737 ************************************ 00:38:08.737 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:38:08.737 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:08.737 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:08.737 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:38:08.737 ************************************ 00:38:08.737 START TEST nvmf_filesystem_in_capsule 00:38:08.737 ************************************ 00:38:08.737 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:38:08.737 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:38:08.737 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:38:08.737 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:08.737 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:08.738 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:38:08.738 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=589882 00:38:08.738 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:38:08.738 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 589882 00:38:08.738 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 589882 ']' 00:38:08.738 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:08.738 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:08.738 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:08.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
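Before the in-capsule pass repeats the same flow with a 4096-byte in-capsule data size, note that the three filesystem_* subtests above reduce to the same short check, condensed here from target/filesystem.sh (device and pid values follow this run; the btrfs and xfs variants swap in mkfs.btrfs -f and mkfs.xfs -f):

    mkfs.ext4 -F /dev/nvme0n1p1            # filesystem under test on the exported namespace's partition
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync          # create, flush, and delete a file over NVMe/TCP
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                     # the target process must have survived the I/O
    lsblk -l -o NAME | grep -q -w nvme0n1  # and the namespace must still be visible on the host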
00:38:08.738 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:08.738 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:38:08.738 [2024-12-09 05:33:02.961400] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:38:08.738 [2024-12-09 05:33:02.961508] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:08.995 [2024-12-09 05:33:03.036245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:08.995 [2024-12-09 05:33:03.098133] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:08.995 [2024-12-09 05:33:03.098200] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:08.995 [2024-12-09 05:33:03.098230] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:08.995 [2024-12-09 05:33:03.098243] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:08.995 [2024-12-09 05:33:03.098254] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:08.995 [2024-12-09 05:33:03.099967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:08.995 [2024-12-09 05:33:03.100033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:08.995 [2024-12-09 05:33:03.100097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:08.995 [2024-12-09 05:33:03.100100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:38:09.251 [2024-12-09 05:33:03.252689] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:09.251 05:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:38:09.251 Malloc1 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:38:09.251 [2024-12-09 05:33:03.453999] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:38:09.251 05:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.251 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:38:09.252 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.252 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:38:09.252 { 00:38:09.252 "name": "Malloc1", 00:38:09.252 "aliases": [ 00:38:09.252 "a031045e-e3cc-4468-a265-4a1c79a228c3" 00:38:09.252 ], 00:38:09.252 "product_name": "Malloc disk", 00:38:09.252 "block_size": 512, 00:38:09.252 "num_blocks": 1048576, 00:38:09.252 "uuid": "a031045e-e3cc-4468-a265-4a1c79a228c3", 00:38:09.252 "assigned_rate_limits": { 00:38:09.252 "rw_ios_per_sec": 0, 00:38:09.252 "rw_mbytes_per_sec": 0, 00:38:09.252 "r_mbytes_per_sec": 0, 00:38:09.252 "w_mbytes_per_sec": 0 00:38:09.252 }, 00:38:09.252 "claimed": true, 00:38:09.252 "claim_type": "exclusive_write", 00:38:09.252 "zoned": false, 00:38:09.252 "supported_io_types": { 00:38:09.252 "read": true, 00:38:09.252 "write": true, 00:38:09.252 "unmap": true, 00:38:09.252 "flush": true, 00:38:09.252 "reset": true, 00:38:09.252 "nvme_admin": false, 00:38:09.252 "nvme_io": false, 00:38:09.252 "nvme_io_md": false, 00:38:09.252 "write_zeroes": true, 00:38:09.252 "zcopy": true, 00:38:09.252 "get_zone_info": false, 00:38:09.252 "zone_management": false, 00:38:09.252 "zone_append": false, 00:38:09.252 "compare": false, 00:38:09.252 "compare_and_write": false, 00:38:09.252 "abort": true, 00:38:09.252 "seek_hole": false, 00:38:09.252 "seek_data": false, 00:38:09.252 "copy": true, 00:38:09.252 "nvme_iov_md": false 00:38:09.252 }, 00:38:09.252 "memory_domains": [ 00:38:09.252 { 00:38:09.252 "dma_device_id": "system", 00:38:09.252 "dma_device_type": 1 00:38:09.252 }, 00:38:09.252 { 00:38:09.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:09.252 "dma_device_type": 2 00:38:09.252 } 00:38:09.252 ], 00:38:09.252 "driver_specific": {} 00:38:09.252 } 00:38:09.252 ]' 00:38:09.252 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:38:09.508 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:38:09.508 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:38:09.508 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:38:09.508 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:38:09.508 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:38:09.508 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:38:09.508 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:38:10.071 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:38:10.071 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:38:10.071 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:38:10.071 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:38:10.071 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:38:12.001 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:38:12.001 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:38:12.001 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:38:12.001 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:38:12.001 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:38:12.002 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:38:12.002 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:38:12.002 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:38:12.002 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:38:12.002 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:38:12.002 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:38:12.002 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:12.002 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:38:12.002 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:38:12.002 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:38:12.002 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:38:12.002 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:38:12.258 05:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:38:12.516 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:38:13.906 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:38:13.906 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:38:13.906 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:13.906 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:13.906 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:38:13.906 ************************************ 00:38:13.906 START TEST filesystem_in_capsule_ext4 00:38:13.906 ************************************ 00:38:13.906 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:38:13.906 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:38:13.906 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:38:13.906 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:38:13.906 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:38:13.906 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:38:13.906 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:38:13.906 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:38:13.906 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:38:13.906 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:38:13.906 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:38:13.906 mke2fs 1.47.0 (5-Feb-2023) 00:38:13.906 Discarding device blocks: 0/522240 done 00:38:13.906 Creating filesystem with 522240 1k blocks and 130560 inodes 00:38:13.906 Filesystem UUID: 5d61f3c4-8190-49c5-b157-b79fdf321801 00:38:13.906 Superblock backups stored on blocks: 00:38:13.906 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:38:13.906 00:38:13.906 Allocating group tables: 0/64 done 00:38:13.906 Writing inode tables: 
0/64 done 00:38:13.906 Creating journal (8192 blocks): done 00:38:13.906 Writing superblocks and filesystem accounting information: 0/64 done 00:38:13.906 00:38:13.906 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:38:13.906 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:38:19.168 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:38:19.168 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:38:19.168 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:38:19.168 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:38:19.168 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:38:19.168 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:38:19.168 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 589882 00:38:19.168 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:38:19.168 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:38:19.168 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:38:19.168 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:38:19.168 00:38:19.168 real 0m5.546s 00:38:19.168 user 0m0.016s 00:38:19.168 sys 0m0.052s 00:38:19.168 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:19.168 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:38:19.168 ************************************ 00:38:19.168 END TEST filesystem_in_capsule_ext4 00:38:19.168 ************************************ 00:38:19.168 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:38:19.168 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:19.168 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:19.168 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:38:19.168 
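With the ext4 pass above complete, the end-to-end flow it exercised can be condensed from the xtrace lines. The NQN, address, serial, and mount point below are simply the values this run happens to use, and the harness additionally bounds its wait loops and compares the device size against the malloc size, which is omitted here; this is a sketch, not the harness script itself.

# Connect the initiator to the SPDK target over TCP (values as logged above).
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
  -n nqn.2016-06.io.spdk:cnode1 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
  --hostid=5b23e107-7094-e311-b1cb-001e67a97d55

# Wait for the namespace with the expected serial to appear, then resolve its name.
until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')

# Partition, format, and exercise the filesystem, mirroring target/filesystem.sh.
mkdir -p /mnt/device
parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe && sleep 1
mkfs.ext4 -F "/dev/${nvme_name}p1"
mount "/dev/${nvme_name}p1" /mnt/device
touch /mnt/device/aaa && sync
rm /mnt/device/aaa && sync
umount /mnt/device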
************************************ 00:38:19.168 START TEST filesystem_in_capsule_btrfs 00:38:19.168 ************************************ 00:38:19.168 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:38:19.168 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:38:19.168 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:38:19.169 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:38:19.169 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:38:19.169 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:38:19.169 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:38:19.169 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:38:19.169 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:38:19.169 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:38:19.169 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:38:19.426 btrfs-progs v6.8.1 00:38:19.426 See https://btrfs.readthedocs.io for more information. 00:38:19.426 00:38:19.426 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:38:19.426 NOTE: several default settings have changed in version 5.15, please make sure 00:38:19.426 this does not affect your deployments: 00:38:19.427 - DUP for metadata (-m dup) 00:38:19.427 - enabled no-holes (-O no-holes) 00:38:19.427 - enabled free-space-tree (-R free-space-tree) 00:38:19.427 00:38:19.427 Label: (null) 00:38:19.427 UUID: 98d8fea6-c938-42a0-9817-2afc6fe3f027 00:38:19.427 Node size: 16384 00:38:19.427 Sector size: 4096 (CPU page size: 4096) 00:38:19.427 Filesystem size: 510.00MiB 00:38:19.427 Block group profiles: 00:38:19.427 Data: single 8.00MiB 00:38:19.427 Metadata: DUP 32.00MiB 00:38:19.427 System: DUP 8.00MiB 00:38:19.427 SSD detected: yes 00:38:19.427 Zoned device: no 00:38:19.427 Features: extref, skinny-metadata, no-holes, free-space-tree 00:38:19.427 Checksum: crc32c 00:38:19.427 Number of devices: 1 00:38:19.427 Devices: 00:38:19.427 ID SIZE PATH 00:38:19.427 1 510.00MiB /dev/nvme0n1p1 00:38:19.427 00:38:19.427 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:38:19.427 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:38:19.684 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:38:19.684 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:38:19.685 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:38:19.685 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:38:19.685 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:38:19.685 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:38:19.685 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 589882 00:38:19.685 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:38:19.685 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:38:19.685 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:38:19.685 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:38:19.685 00:38:19.685 real 0m0.490s 00:38:19.685 user 0m0.016s 00:38:19.685 sys 0m0.100s 00:38:19.685 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:19.685 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:38:19.685 ************************************ 00:38:19.685 END TEST filesystem_in_capsule_btrfs 00:38:19.685 ************************************ 00:38:19.685 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:38:19.685 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:19.685 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:19.685 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:38:19.685 ************************************ 00:38:19.685 START TEST filesystem_in_capsule_xfs 00:38:19.685 ************************************ 00:38:19.685 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:38:19.685 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:38:19.685 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:38:19.685 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:38:19.685 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:38:19.685 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:38:19.685 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:38:19.685 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:38:19.685 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:38:19.685 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:38:19.685 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:38:19.942 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:38:19.942 = sectsz=512 attr=2, projid32bit=1 00:38:19.942 = crc=1 finobt=1, sparse=1, rmapbt=0 00:38:19.942 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:38:19.942 data = bsize=4096 blocks=130560, imaxpct=25 00:38:19.942 = sunit=0 swidth=0 blks 00:38:19.942 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:38:19.942 log =internal log bsize=4096 blocks=16384, version=2 00:38:19.942 = sectsz=512 sunit=0 blks, lazy-count=1 00:38:19.942 realtime =none extsz=4096 blocks=0, rtextents=0 00:38:20.507 Discarding blocks...Done. 
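All three filesystem cases (ext4, btrfs, xfs) funnel through the same make_filesystem helper whose xtrace appears above: ext4 gets -F, every other filesystem gets -f, and mkfs is retried. The sketch below is reconstructed from those traces; only the flag selection and the mkfs.$fstype invocation are directly visible in the log, so the retry bound and sleep are assumptions.

# Reconstruction of the make_filesystem helper seen in the autotest_common.sh traces above.
make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local i=0
    local force
    if [ "$fstype" = ext4 ]; then
        force=-F            # mkfs.ext4 forces with -F
    else
        force=-f            # mkfs.btrfs / mkfs.xfs force with -f
    fi
    while ! mkfs."$fstype" $force "$dev_name"; do
        (( i++ >= 5 )) && return 1   # assumed retry bound; the real helper's limit is not shown in this excerpt
        sleep 1
    done
    return 0
}

Invoked as make_filesystem xfs /dev/nvme0n1p1, this matches the mkfs.xfs -f run whose geometry output is printed just above.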
00:38:20.507 05:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:38:20.507 05:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:38:22.403 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:38:22.403 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:38:22.403 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:38:22.403 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:38:22.403 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:38:22.403 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:38:22.403 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 589882 00:38:22.403 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:38:22.403 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:38:22.403 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:38:22.403 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:38:22.403 00:38:22.403 real 0m2.671s 00:38:22.403 user 0m0.023s 00:38:22.403 sys 0m0.053s 00:38:22.403 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:22.403 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:38:22.403 ************************************ 00:38:22.403 END TEST filesystem_in_capsule_xfs 00:38:22.403 ************************************ 00:38:22.403 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:38:22.660 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:38:22.660 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:38:22.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:38:22.916 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:38:22.916 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:38:22.916 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:38:22.916 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:22.916 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:38:22.916 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:22.916 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:38:22.916 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:22.916 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:22.916 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:38:22.916 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:22.916 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:38:22.916 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 589882 00:38:22.916 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 589882 ']' 00:38:22.916 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 589882 00:38:22.916 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:38:22.916 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:22.916 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 589882 00:38:22.916 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:22.916 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:22.916 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 589882' 00:38:22.916 killing process with pid 589882 00:38:22.916 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 589882 00:38:22.916 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 589882 00:38:23.481 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:38:23.481 00:38:23.481 real 0m14.527s 00:38:23.481 user 0m55.947s 00:38:23.481 sys 0m1.985s 00:38:23.481 05:33:17 
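The teardown that closes the in_capsule run, spread across the lines above, amounts to the following. Here nvmfpid stands for the target pid recorded at startup (589882 in this run), and the real killprocess helper also verifies the process name before signalling, which is omitted from this sketch.

# Drop the test partition and detach the remote namespace.
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1

# Wait until no block device with the SPDK serial remains visible.
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done

# Remove the subsystem on the target (rpc_cmd is the harness wrapper around scripts/rpc.py),
# then stop the nvmf_tgt process started earlier.
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid" && wait "$nvmfpid"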
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:23.481 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:38:23.481 ************************************ 00:38:23.481 END TEST nvmf_filesystem_in_capsule 00:38:23.481 ************************************ 00:38:23.481 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:38:23.481 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:23.481 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:38:23.481 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:23.481 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:38:23.481 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:23.481 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:23.481 rmmod nvme_tcp 00:38:23.481 rmmod nvme_fabrics 00:38:23.481 rmmod nvme_keyring 00:38:23.481 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:23.481 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:38:23.481 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:38:23.481 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:38:23.481 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:23.481 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:23.482 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:23.482 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:38:23.482 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:38:23.482 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:23.482 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:38:23.482 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:23.482 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:23.482 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:23.482 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:23.482 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:25.388 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:25.388 00:38:25.388 real 0m34.522s 00:38:25.388 user 1m55.133s 00:38:25.388 sys 0m5.842s 00:38:25.388 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:25.388 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:38:25.388 
************************************ 00:38:25.388 END TEST nvmf_filesystem 00:38:25.388 ************************************ 00:38:25.388 05:33:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:38:25.388 05:33:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:25.388 05:33:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:25.388 05:33:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:38:25.648 ************************************ 00:38:25.648 START TEST nvmf_target_discovery 00:38:25.648 ************************************ 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:38:25.648 * Looking for test storage... 00:38:25.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:38:25.648 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:25.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.649 --rc genhtml_branch_coverage=1 00:38:25.649 --rc genhtml_function_coverage=1 00:38:25.649 --rc genhtml_legend=1 00:38:25.649 --rc geninfo_all_blocks=1 00:38:25.649 --rc geninfo_unexecuted_blocks=1 00:38:25.649 00:38:25.649 ' 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:25.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.649 --rc genhtml_branch_coverage=1 00:38:25.649 --rc genhtml_function_coverage=1 00:38:25.649 --rc genhtml_legend=1 00:38:25.649 --rc geninfo_all_blocks=1 00:38:25.649 --rc geninfo_unexecuted_blocks=1 00:38:25.649 00:38:25.649 ' 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:25.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.649 --rc genhtml_branch_coverage=1 00:38:25.649 --rc genhtml_function_coverage=1 00:38:25.649 --rc genhtml_legend=1 00:38:25.649 --rc geninfo_all_blocks=1 00:38:25.649 --rc geninfo_unexecuted_blocks=1 00:38:25.649 00:38:25.649 ' 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:25.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.649 --rc genhtml_branch_coverage=1 00:38:25.649 --rc genhtml_function_coverage=1 00:38:25.649 --rc genhtml_legend=1 00:38:25.649 --rc geninfo_all_blocks=1 00:38:25.649 --rc geninfo_unexecuted_blocks=1 00:38:25.649 00:38:25.649 ' 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:25.649 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:25.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:25.650 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:25.650 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:25.650 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:25.650 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:38:25.650 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:38:25.650 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:38:25.650 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:38:25.650 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:38:25.650 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:25.650 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:25.650 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:25.650 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:25.650 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:25.650 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:25.650 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:25.650 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:25.650 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:25.650 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:25.650 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:38:25.650 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.184 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:28.184 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:38:28.184 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:28.184 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:28.184 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:28.184 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:28.184 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:28.184 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:38:28.184 05:33:21 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:28.184 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:38:28.184 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:38:28.184 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:38:28.184 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:28.185 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:28.185 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:28.185 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:28.185 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:28.185 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:28.185 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:28.185 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:28.186 05:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:28.186 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:28.186 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:28.186 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:28.186 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:28.186 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:28.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:28.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:38:28.186 00:38:28.186 --- 10.0.0.2 ping statistics --- 00:38:28.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:28.186 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:38:28.186 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:28.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:28.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:38:28.186 00:38:28.186 --- 10.0.0.1 ping statistics --- 00:38:28.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:28.186 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:38:28.186 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:28.186 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:38:28.186 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:28.186 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:28.186 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:28.186 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:28.186 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:28.186 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:28.186 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:28.186 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:38:28.186 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:28.186 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:28.186 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.186 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=594270 00:38:28.186 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 594270 00:38:28.186 05:33:22 
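For the phy/e810 configuration probed above, the target and initiator are split across a network namespace before nvmf_tgt starts. Gathered in one place from the nvmf_tcp_init trace above (the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are what this host uses, not fixed values):

# Move the target-side port into its own namespace; the initiator keeps cvl_0_1.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port in the local firewall, tagged so the SPDK_NVMF rules can be stripped later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Sanity-check reachability in both directions before launching the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1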
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 594270 ']' 00:38:28.186 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:38:28.186 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:28.186 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:28.186 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:28.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:28.186 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:28.186 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.186 [2024-12-09 05:33:22.275262] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:38:28.186 [2024-12-09 05:33:22.275362] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:28.186 [2024-12-09 05:33:22.347020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:28.186 [2024-12-09 05:33:22.404461] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:28.186 [2024-12-09 05:33:22.404513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:28.186 [2024-12-09 05:33:22.404534] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:28.186 [2024-12-09 05:33:22.404546] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:28.186 [2024-12-09 05:33:22.404556] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
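The entries above show the harness starting the SPDK NVMe-oF target inside the cvl_0_0_ns_spdk namespace it just configured and then waiting for the RPC socket to come up. A minimal stand-alone sketch of that step (the binary path, the -i/-e/-m flags and the socket path are taken from the trace; the polling loop is an illustrative substitute for the harness's waitforlisten helper, not its actual code):

    # Launch nvmf_tgt in the target namespace: -i 0 = shared-memory id, -e 0xFFFF = all
    # tracepoint groups, -m 0xF = reactors on cores 0-3 (matching the reactor notices below).
    sudo ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # Poll the UNIX-domain RPC socket until the target answers a trivial RPC.
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done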
00:38:28.186 [2024-12-09 05:33:22.406095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:28.186 [2024-12-09 05:33:22.406161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:28.186 [2024-12-09 05:33:22.406228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:28.186 [2024-12-09 05:33:22.406232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.445 [2024-12-09 05:33:22.555712] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.445 Null1 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.445 05:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.445 [2024-12-09 05:33:22.601473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.445 Null2 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:38:28.445 Null3 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.445 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.703 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.703 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:38:28.703 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:38:28.703 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.703 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.703 Null4 00:38:28.703 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.703 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:38:28.703 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.703 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.703 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.703 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:38:28.703 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.703 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.703 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.703 05:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:38:28.703 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.703 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.703 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.703 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:28.703 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.703 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.703 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.703 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:38:28.703 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.703 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.703 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.703 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:38:28.961 00:38:28.961 Discovery Log Number of Records 6, Generation counter 6 00:38:28.961 =====Discovery Log Entry 0====== 00:38:28.961 trtype: tcp 00:38:28.961 adrfam: ipv4 00:38:28.961 subtype: current discovery subsystem 00:38:28.961 treq: not required 00:38:28.961 portid: 0 00:38:28.961 trsvcid: 4420 00:38:28.961 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:28.961 traddr: 10.0.0.2 00:38:28.961 eflags: explicit discovery connections, duplicate discovery information 00:38:28.961 sectype: none 00:38:28.961 =====Discovery Log Entry 1====== 00:38:28.961 trtype: tcp 00:38:28.961 adrfam: ipv4 00:38:28.961 subtype: nvme subsystem 00:38:28.961 treq: not required 00:38:28.961 portid: 0 00:38:28.961 trsvcid: 4420 00:38:28.961 subnqn: nqn.2016-06.io.spdk:cnode1 00:38:28.961 traddr: 10.0.0.2 00:38:28.961 eflags: none 00:38:28.961 sectype: none 00:38:28.961 =====Discovery Log Entry 2====== 00:38:28.961 trtype: tcp 00:38:28.961 adrfam: ipv4 00:38:28.961 subtype: nvme subsystem 00:38:28.961 treq: not required 00:38:28.961 portid: 0 00:38:28.961 trsvcid: 4420 00:38:28.961 subnqn: nqn.2016-06.io.spdk:cnode2 00:38:28.961 traddr: 10.0.0.2 00:38:28.961 eflags: none 00:38:28.961 sectype: none 00:38:28.961 =====Discovery Log Entry 3====== 00:38:28.961 trtype: tcp 00:38:28.961 adrfam: ipv4 00:38:28.961 subtype: nvme subsystem 00:38:28.961 treq: not required 00:38:28.961 portid: 0 00:38:28.961 trsvcid: 4420 00:38:28.961 subnqn: nqn.2016-06.io.spdk:cnode3 00:38:28.961 traddr: 10.0.0.2 00:38:28.961 eflags: none 00:38:28.961 sectype: none 00:38:28.961 =====Discovery Log Entry 4====== 00:38:28.961 trtype: tcp 00:38:28.961 adrfam: ipv4 00:38:28.961 subtype: nvme subsystem 
00:38:28.961 treq: not required 00:38:28.961 portid: 0 00:38:28.961 trsvcid: 4420 00:38:28.961 subnqn: nqn.2016-06.io.spdk:cnode4 00:38:28.961 traddr: 10.0.0.2 00:38:28.961 eflags: none 00:38:28.961 sectype: none 00:38:28.961 =====Discovery Log Entry 5====== 00:38:28.961 trtype: tcp 00:38:28.961 adrfam: ipv4 00:38:28.961 subtype: discovery subsystem referral 00:38:28.961 treq: not required 00:38:28.961 portid: 0 00:38:28.961 trsvcid: 4430 00:38:28.961 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:28.961 traddr: 10.0.0.2 00:38:28.961 eflags: none 00:38:28.961 sectype: none 00:38:28.961 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:38:28.961 Perform nvmf subsystem discovery via RPC 00:38:28.961 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:38:28.961 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.961 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.961 [ 00:38:28.961 { 00:38:28.961 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:38:28.961 "subtype": "Discovery", 00:38:28.961 "listen_addresses": [ 00:38:28.961 { 00:38:28.961 "trtype": "TCP", 00:38:28.961 "adrfam": "IPv4", 00:38:28.961 "traddr": "10.0.0.2", 00:38:28.961 "trsvcid": "4420" 00:38:28.961 } 00:38:28.961 ], 00:38:28.961 "allow_any_host": true, 00:38:28.961 "hosts": [] 00:38:28.961 }, 00:38:28.962 { 00:38:28.962 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:28.962 "subtype": "NVMe", 00:38:28.962 "listen_addresses": [ 00:38:28.962 { 00:38:28.962 "trtype": "TCP", 00:38:28.962 "adrfam": "IPv4", 00:38:28.962 "traddr": "10.0.0.2", 00:38:28.962 "trsvcid": "4420" 00:38:28.962 } 00:38:28.962 ], 00:38:28.962 "allow_any_host": true, 00:38:28.962 "hosts": [], 00:38:28.962 "serial_number": "SPDK00000000000001", 00:38:28.962 "model_number": "SPDK bdev Controller", 00:38:28.962 "max_namespaces": 32, 00:38:28.962 "min_cntlid": 1, 00:38:28.962 "max_cntlid": 65519, 00:38:28.962 "namespaces": [ 00:38:28.962 { 00:38:28.962 "nsid": 1, 00:38:28.962 "bdev_name": "Null1", 00:38:28.962 "name": "Null1", 00:38:28.962 "nguid": "6CEA29296F6941A594434938D9D7AB79", 00:38:28.962 "uuid": "6cea2929-6f69-41a5-9443-4938d9d7ab79" 00:38:28.962 } 00:38:28.962 ] 00:38:28.962 }, 00:38:28.962 { 00:38:28.962 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:38:28.962 "subtype": "NVMe", 00:38:28.962 "listen_addresses": [ 00:38:28.962 { 00:38:28.962 "trtype": "TCP", 00:38:28.962 "adrfam": "IPv4", 00:38:28.962 "traddr": "10.0.0.2", 00:38:28.962 "trsvcid": "4420" 00:38:28.962 } 00:38:28.962 ], 00:38:28.962 "allow_any_host": true, 00:38:28.962 "hosts": [], 00:38:28.962 "serial_number": "SPDK00000000000002", 00:38:28.962 "model_number": "SPDK bdev Controller", 00:38:28.962 "max_namespaces": 32, 00:38:28.962 "min_cntlid": 1, 00:38:28.962 "max_cntlid": 65519, 00:38:28.962 "namespaces": [ 00:38:28.962 { 00:38:28.962 "nsid": 1, 00:38:28.962 "bdev_name": "Null2", 00:38:28.962 "name": "Null2", 00:38:28.962 "nguid": "AD511B3406A64362BEECB3B57BE659A6", 00:38:28.962 "uuid": "ad511b34-06a6-4362-beec-b3b57be659a6" 00:38:28.962 } 00:38:28.962 ] 00:38:28.962 }, 00:38:28.962 { 00:38:28.962 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:38:28.962 "subtype": "NVMe", 00:38:28.962 "listen_addresses": [ 00:38:28.962 { 00:38:28.962 "trtype": "TCP", 00:38:28.962 "adrfam": "IPv4", 00:38:28.962 "traddr": "10.0.0.2", 
00:38:28.962 "trsvcid": "4420" 00:38:28.962 } 00:38:28.962 ], 00:38:28.962 "allow_any_host": true, 00:38:28.962 "hosts": [], 00:38:28.962 "serial_number": "SPDK00000000000003", 00:38:28.962 "model_number": "SPDK bdev Controller", 00:38:28.962 "max_namespaces": 32, 00:38:28.962 "min_cntlid": 1, 00:38:28.962 "max_cntlid": 65519, 00:38:28.962 "namespaces": [ 00:38:28.962 { 00:38:28.962 "nsid": 1, 00:38:28.962 "bdev_name": "Null3", 00:38:28.962 "name": "Null3", 00:38:28.962 "nguid": "19C36BC46FD24C80BD37BF2BECCBFB95", 00:38:28.962 "uuid": "19c36bc4-6fd2-4c80-bd37-bf2beccbfb95" 00:38:28.962 } 00:38:28.962 ] 00:38:28.962 }, 00:38:28.962 { 00:38:28.962 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:38:28.962 "subtype": "NVMe", 00:38:28.962 "listen_addresses": [ 00:38:28.962 { 00:38:28.962 "trtype": "TCP", 00:38:28.962 "adrfam": "IPv4", 00:38:28.962 "traddr": "10.0.0.2", 00:38:28.962 "trsvcid": "4420" 00:38:28.962 } 00:38:28.962 ], 00:38:28.962 "allow_any_host": true, 00:38:28.962 "hosts": [], 00:38:28.962 "serial_number": "SPDK00000000000004", 00:38:28.962 "model_number": "SPDK bdev Controller", 00:38:28.962 "max_namespaces": 32, 00:38:28.962 "min_cntlid": 1, 00:38:28.962 "max_cntlid": 65519, 00:38:28.962 "namespaces": [ 00:38:28.962 { 00:38:28.962 "nsid": 1, 00:38:28.962 "bdev_name": "Null4", 00:38:28.962 "name": "Null4", 00:38:28.962 "nguid": "6585B0ADA142489CB95C9E29A2FB274B", 00:38:28.962 "uuid": "6585b0ad-a142-489c-b95c-9e29a2fb274b" 00:38:28.962 } 00:38:28.962 ] 00:38:28.962 } 00:38:28.962 ] 00:38:28.962 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.962 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:38:28.962 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:38:28.962 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:28.962 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.962 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.962 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.962 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:38:28.962 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.962 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.962 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.962 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:38:28.962 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:38:28.962 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.962 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.962 05:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:38:28.962 05:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:28.962 rmmod nvme_tcp 00:38:28.962 rmmod nvme_fabrics 00:38:28.962 rmmod nvme_keyring 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 594270 ']' 00:38:28.962 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 594270 00:38:28.963 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 594270 ']' 00:38:28.963 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 594270 00:38:28.963 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:38:28.963 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:28.963 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 594270 00:38:29.221 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:29.221 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:29.221 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 594270' 00:38:29.221 killing process with pid 594270 00:38:29.221 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 594270 00:38:29.221 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 594270 00:38:29.479 05:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:29.479 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:29.479 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:29.479 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:38:29.479 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:38:29.479 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:29.479 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:38:29.479 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:29.479 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:29.479 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:29.479 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:29.479 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:31.385 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:31.385 00:38:31.385 real 0m5.891s 00:38:31.385 user 0m5.016s 00:38:31.385 sys 0m2.013s 00:38:31.385 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:31.385 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:31.385 ************************************ 00:38:31.385 END TEST nvmf_target_discovery 00:38:31.385 ************************************ 00:38:31.385 05:33:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:38:31.385 05:33:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:31.385 05:33:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:31.385 05:33:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:38:31.385 ************************************ 00:38:31.385 START TEST nvmf_referrals 00:38:31.385 ************************************ 00:38:31.385 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:38:31.643 * Looking for test storage... 
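The discovery test that just finished drove the target through a fixed RPC sequence: create the TCP transport, then for each of four null bdevs create a subsystem, attach the bdev as a namespace and add a 10.0.0.2:4420 listener, add a discovery listener plus a 4430 referral, verify the result with nvme discover and nvmf_get_subsystems, and finally tear everything down. A condensed sketch of the same sequence for cnode1 only, assuming the rpc_cmd wrapper seen in the trace forwards to the in-tree scripts/rpc.py on /var/tmp/spdk.sock (all arguments are copied from the trace):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

    $RPC nvmf_create_transport -t tcp -o -u 8192          # transport flags copied verbatim from the trace
    $RPC bdev_null_create Null1 102400 512                # size/block-size arguments as used by the test
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

    nvme discover -t tcp -a 10.0.0.2 -s 4420              # should list cnode1 and the 4430 referral (--hostnqn/--hostid omitted here)
    $RPC nvmf_get_subsystems                              # JSON view like the one shown in the log above

    # Teardown mirrors the setup.
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    $RPC bdev_null_delete Null1
    $RPC nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430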
00:38:31.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:31.643 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:31.643 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:38:31.643 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:31.643 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:31.643 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:31.643 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:31.643 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:31.643 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:31.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:31.644 --rc genhtml_branch_coverage=1 00:38:31.644 --rc genhtml_function_coverage=1 00:38:31.644 --rc genhtml_legend=1 00:38:31.644 --rc geninfo_all_blocks=1 00:38:31.644 --rc geninfo_unexecuted_blocks=1 00:38:31.644 00:38:31.644 ' 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:31.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:31.644 --rc genhtml_branch_coverage=1 00:38:31.644 --rc genhtml_function_coverage=1 00:38:31.644 --rc genhtml_legend=1 00:38:31.644 --rc geninfo_all_blocks=1 00:38:31.644 --rc geninfo_unexecuted_blocks=1 00:38:31.644 00:38:31.644 ' 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:31.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:31.644 --rc genhtml_branch_coverage=1 00:38:31.644 --rc genhtml_function_coverage=1 00:38:31.644 --rc genhtml_legend=1 00:38:31.644 --rc geninfo_all_blocks=1 00:38:31.644 --rc geninfo_unexecuted_blocks=1 00:38:31.644 00:38:31.644 ' 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:31.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:31.644 --rc genhtml_branch_coverage=1 00:38:31.644 --rc genhtml_function_coverage=1 00:38:31.644 --rc genhtml_legend=1 00:38:31.644 --rc geninfo_all_blocks=1 00:38:31.644 --rc geninfo_unexecuted_blocks=1 00:38:31.644 00:38:31.644 ' 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:31.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
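The "[: : integer expression expected" message above comes from nvmf/common.sh line 33, where build_nvmf_app_args hands an empty variable to a numeric '-eq 1' test; the harness carries on regardless. The usual way to keep such a test quiet is a default expansion, sketched here with a deliberately hypothetical variable name (the real flag checked at that line is not identifiable from the log):

    # SOME_FLAG is a placeholder, not the harness's actual variable.
    if [[ ${SOME_FLAG:-0} -eq 1 ]]; then
        echo "flag enabled"
    fi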
00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:31.644 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:31.645 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:31.645 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:31.645 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:31.645 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:31.645 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:31.645 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:31.645 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:31.645 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:31.645 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:38:31.645 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:38:34.174 05:33:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:34.174 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:34.174 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:34.175 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:34.175 
05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:34.175 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:34.175 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:34.175 05:33:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:34.175 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:34.175 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:34.175 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:34.175 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:34.175 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:34.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:34.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:38:34.175 00:38:34.175 --- 10.0.0.2 ping statistics --- 00:38:34.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:34.175 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:38:34.175 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:34.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:34.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:38:34.175 00:38:34.175 --- 10.0.0.1 ping statistics --- 00:38:34.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:34.175 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:38:34.175 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:34.175 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:38:34.175 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:34.175 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:34.175 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:34.175 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:34.175 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:34.175 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:34.175 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:34.175 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:38:34.175 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:34.175 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:34.175 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:38:34.175 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=596372 00:38:34.175 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 596372 00:38:34.175 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 596372 ']' 00:38:34.175 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:34.175 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:38:34.175 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:34.175 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:34.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
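The nvmf_tcp_init trace above amounts to a small two-port loopback topology: one E810 port (cvl_0_0) is moved into a dedicated network namespace and becomes the target side at 10.0.0.2/24, while the other port (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1/24; an iptables ACCEPT rule opens TCP port 4420 on the initiator interface, and a ping in each direction confirms the link before the target application is started. A minimal sketch of the same setup, with the namespace, interface names and addresses taken from the log (the real helper in nvmf/common.sh does additional flushing and bookkeeping), looks like this:

  # Sketch only: namespace, interface names and addresses mirror the trace above.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                            # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator

The helper also tags the iptables rule with an SPDK_NVMF comment (visible in the ipts trace), which is what lets the teardown later in this log restore the firewall by filtering that comment out of iptables-save.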
00:38:34.175 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:34.175 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:38:34.175 [2024-12-09 05:33:28.196435] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:38:34.175 [2024-12-09 05:33:28.196531] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:34.175 [2024-12-09 05:33:28.273521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:34.175 [2024-12-09 05:33:28.333373] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:34.175 [2024-12-09 05:33:28.333438] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:34.175 [2024-12-09 05:33:28.333452] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:34.175 [2024-12-09 05:33:28.333463] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:34.175 [2024-12-09 05:33:28.333472] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:34.175 [2024-12-09 05:33:28.334994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:34.175 [2024-12-09 05:33:28.335021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:34.175 [2024-12-09 05:33:28.335080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:34.175 [2024-12-09 05:33:28.335083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:38:34.434 [2024-12-09 05:33:28.488390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
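What follows in the trace is the core of referrals.sh: the discovery service on 10.0.0.2:8009 is populated with three referrals (127.0.0.2 through 127.0.0.4, port 4430), and get_referral_ips checks that the list reported over RPC matches what an initiator sees with nvme discover, both before and after the referrals are removed again. The test drives this through its rpc_cmd wrapper; a hedged equivalent using scripts/rpc.py directly (rpc.py invocation and the default /var/tmp/spdk.sock socket are assumptions, the subcommands and arguments mirror the log, and the log's nvme discover additionally passes --hostnqn/--hostid) would be:

  # Sketch of the referral setup and verification steps traced below.
  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  # Referral list as the target reports it over RPC ...
  $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  # ... compared with what an initiator actually discovers on the wire.
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

The jq filter drops the "current discovery subsystem" record so that only the referred-to entries are compared, which is exactly the comparison the [[ ... == ... ]] checks in the trace perform.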
00:38:34.434 [2024-12-09 05:33:28.513492] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:38:34.434 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:38:34.691 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:38:34.691 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:38:34.691 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:38:34.691 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.691 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:38:34.691 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.691 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:38:34.691 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.691 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:38:34.691 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.691 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:38:34.691 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.691 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:38:34.691 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.691 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:38:34.691 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:38:34.691 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.691 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:38:34.691 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.691 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:38:34.691 05:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:38:34.691 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:38:34.691 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:38:34.691 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:38:34.691 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:38:34.691 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:38:34.947 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:38:34.947 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:38:34.947 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:38:34.947 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.947 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:38:34.947 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.947 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:38:34.947 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.947 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:38:34.947 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.947 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:38:34.947 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:38:34.947 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:38:34.947 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:38:34.947 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.947 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:38:34.947 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:38:34.947 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.204 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:38:35.204 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:38:35.204 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:38:35.204 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:38:35.204 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:38:35.204 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:38:35.204 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:38:35.204 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:38:35.204 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:38:35.204 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:38:35.204 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:38:35.204 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:38:35.204 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:38:35.204 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:38:35.204 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:38:35.461 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:38:35.461 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:38:35.461 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:38:35.461 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:38:35.461 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:38:35.461 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:38:35.720 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:38:35.720 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:38:35.720 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.720 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:38:35.720 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.720 05:33:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:38:35.720 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:38:35.720 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:38:35.720 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:38:35.720 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.720 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:38:35.720 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:38:35.720 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.720 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:38:35.720 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:38:35.720 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:38:35.720 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:38:35.720 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:38:35.720 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:38:35.720 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:38:35.720 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:38:35.978 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:38:35.978 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:38:35.978 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:38:35.978 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:38:35.978 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:38:35.978 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:38:35.978 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:38:35.978 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:38:35.978 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:38:35.978 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:38:35.978 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@76 -- # jq -r .subnqn 00:38:35.978 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:38:35.978 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:38:36.236 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:38:36.236 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:38:36.236 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.236 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:38:36.236 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:36.236 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:38:36.236 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:38:36.236 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.236 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:38:36.236 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:36.236 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:38:36.236 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:38:36.236 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:38:36.236 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:38:36.236 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:38:36.236 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:38:36.236 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:38:36.494 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:38:36.494 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:38:36.494 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:38:36.494 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:38:36.494 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:36.494 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:38:36.494 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp 
']' 00:38:36.494 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:38:36.494 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:36.494 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:36.494 rmmod nvme_tcp 00:38:36.494 rmmod nvme_fabrics 00:38:36.494 rmmod nvme_keyring 00:38:36.494 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:36.494 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:38:36.494 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:38:36.494 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 596372 ']' 00:38:36.494 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 596372 00:38:36.494 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 596372 ']' 00:38:36.494 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 596372 00:38:36.494 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:38:36.494 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:36.494 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 596372 00:38:36.494 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:36.494 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:36.494 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 596372' 00:38:36.494 killing process with pid 596372 00:38:36.494 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 596372 00:38:36.494 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 596372 00:38:36.752 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:36.752 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:36.752 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:36.752 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:38:36.752 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:38:36.752 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:36.752 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:38:36.752 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:36.752 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:36.752 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:36.752 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:36.752 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:38.757 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:38.757 00:38:38.757 real 0m7.390s 00:38:38.757 user 0m11.627s 00:38:38.757 sys 0m2.420s 00:38:38.757 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:38.757 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:38:38.757 ************************************ 00:38:38.757 END TEST nvmf_referrals 00:38:38.757 ************************************ 00:38:38.757 05:33:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:38:39.035 05:33:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:39.035 05:33:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:39.035 05:33:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:38:39.035 ************************************ 00:38:39.035 START TEST nvmf_connect_disconnect 00:38:39.035 ************************************ 00:38:39.035 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:38:39.035 * Looking for test storage... 00:38:39.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:39.035 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:39.035 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:38:39.035 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:39.035 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:39.035 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:39.035 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:39.035 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:39.035 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:38:39.035 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:38:39.035 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:38:39.035 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:38:39.035 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:38:39.035 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:38:39.035 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:38:39.035 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:39.035 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:38:39.035 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:38:39.035 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:39.035 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:39.035 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:38:39.035 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:39.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.036 --rc genhtml_branch_coverage=1 00:38:39.036 --rc genhtml_function_coverage=1 00:38:39.036 --rc genhtml_legend=1 00:38:39.036 --rc geninfo_all_blocks=1 00:38:39.036 --rc geninfo_unexecuted_blocks=1 00:38:39.036 00:38:39.036 ' 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:39.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.036 --rc genhtml_branch_coverage=1 00:38:39.036 --rc genhtml_function_coverage=1 00:38:39.036 --rc genhtml_legend=1 00:38:39.036 --rc geninfo_all_blocks=1 00:38:39.036 --rc geninfo_unexecuted_blocks=1 00:38:39.036 00:38:39.036 ' 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:39.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.036 --rc genhtml_branch_coverage=1 00:38:39.036 --rc genhtml_function_coverage=1 00:38:39.036 --rc genhtml_legend=1 00:38:39.036 --rc geninfo_all_blocks=1 00:38:39.036 --rc geninfo_unexecuted_blocks=1 00:38:39.036 00:38:39.036 ' 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:39.036 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.036 --rc genhtml_branch_coverage=1 00:38:39.036 --rc genhtml_function_coverage=1 00:38:39.036 --rc genhtml_legend=1 00:38:39.036 --rc geninfo_all_blocks=1 00:38:39.036 --rc geninfo_unexecuted_blocks=1 00:38:39.036 00:38:39.036 ' 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:39.036 05:33:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:39.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:38:39.036 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:41.571 
05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:41.571 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:41.571 
05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:41.571 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:41.571 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:41.571 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
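As in the earlier referrals run, the helper resolves each matched PCI function to its kernel network interface by listing /sys/bus/pci/devices/<bdf>/net/, which is how 0000:0a:00.0 and 0000:0a:00.1 become cvl_0_0 and cvl_0_1 in the "Found net devices under ..." lines. A stripped-down sketch of that lookup, with the vendor/device IDs and paths as reported in the trace (0x8086:0x159b is the E810 entry added to the e810 array above):

  # Sketch: map an E810 PCI function to its netdev name the same way the traced helper does.
  pci=0000:0a:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the name
  echo "Found net devices under $pci: ${pci_net_devs[*]}"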
00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:41.572 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:41.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:41.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:38:41.572 00:38:41.572 --- 10.0.0.2 ping statistics --- 00:38:41.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:41.572 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:41.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:41.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:38:41.572 00:38:41.572 --- 10.0.0.1 ping statistics --- 00:38:41.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:41.572 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=598799 00:38:41.572 05:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 598799 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 598799 ']' 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:41.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:41.572 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:41.572 [2024-12-09 05:33:35.536766] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:38:41.572 [2024-12-09 05:33:35.536835] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:41.572 [2024-12-09 05:33:35.612439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:41.572 [2024-12-09 05:33:35.674863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:41.572 [2024-12-09 05:33:35.674925] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:41.572 [2024-12-09 05:33:35.674954] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:41.572 [2024-12-09 05:33:35.674965] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:41.572 [2024-12-09 05:33:35.674975] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
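The nvmf_tcp_init sequence above builds the physical-loopback topology this test group runs on: one port of the dual-port E810 (cvl_0_0) is moved into a private network namespace and addressed as the target (10.0.0.2), while its sibling port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1); an iptables rule opens TCP port 4420 and the two pings confirm reachability before nvmf_tgt is launched inside the namespace. A condensed, hedged re-creation of those commands (interface and namespace names taken from this log; not a substitute for nvmf/common.sh):

    TGT_IF=cvl_0_0            # goes into the namespace, carries the target IP
    INI_IF=cvl_0_1            # stays in the root namespace, carries the initiator IP
    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP port toward the initiator-side interface
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                         # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1     # target namespace -> initiator
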
00:38:41.572 [2024-12-09 05:33:35.676711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:41.572 [2024-12-09 05:33:35.676778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:41.572 [2024-12-09 05:33:35.676809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:41.572 [2024-12-09 05:33:35.676812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:41.829 [2024-12-09 05:33:35.835050] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:41.829 05:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:41.829 [2024-12-09 05:33:35.895394] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:38:41.829 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:38:45.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:38:47.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:38:50.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:38:52.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:38:55.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:55.955 rmmod nvme_tcp 00:38:55.955 rmmod nvme_fabrics 00:38:55.955 rmmod nvme_keyring 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 598799 ']' 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 598799 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 598799 ']' 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 598799 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
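Everything from nvmf_create_transport down to nvmf_subsystem_add_listener above is connect_disconnect.sh configuring the target over JSON-RPC: a TCP transport, a 64 MB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1, the bdev attached as a namespace, and a listener on 10.0.0.2:4420; the five "NQN:... disconnected" lines are the connect/disconnect iterations themselves (num_iterations=5), and the rmmod/killprocess lines that follow are the teardown. Roughly the same setup could be issued by hand with SPDK's stock scripts/rpc.py, as sketched below with arguments copied from the log (an illustration, not the test script):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0
    $RPC bdev_malloc_create 64 512                      # 64 MB bdev, 512-byte blocks -> Malloc0
    $RPC nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns "$NQN" Malloc0
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
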
00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 598799 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 598799' 00:38:55.955 killing process with pid 598799 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 598799 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 598799 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:55.955 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:57.854 05:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:57.854 00:38:57.854 real 0m18.947s 00:38:57.854 user 0m56.496s 00:38:57.854 sys 0m3.411s 00:38:57.854 05:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:57.854 05:33:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:57.854 ************************************ 00:38:57.854 END TEST nvmf_connect_disconnect 00:38:57.854 ************************************ 00:38:57.854 05:33:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:38:57.854 05:33:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:57.854 05:33:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:57.854 05:33:51 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:38:57.854 ************************************ 00:38:57.854 START TEST nvmf_multitarget 00:38:57.854 ************************************ 00:38:57.854 05:33:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:38:57.854 * Looking for test storage... 00:38:57.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:57.854 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:57.854 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:38:57.854 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:58.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.113 --rc genhtml_branch_coverage=1 00:38:58.113 --rc genhtml_function_coverage=1 00:38:58.113 --rc genhtml_legend=1 00:38:58.113 --rc geninfo_all_blocks=1 00:38:58.113 --rc geninfo_unexecuted_blocks=1 00:38:58.113 00:38:58.113 ' 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:58.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.113 --rc genhtml_branch_coverage=1 00:38:58.113 --rc genhtml_function_coverage=1 00:38:58.113 --rc genhtml_legend=1 00:38:58.113 --rc geninfo_all_blocks=1 00:38:58.113 --rc geninfo_unexecuted_blocks=1 00:38:58.113 00:38:58.113 ' 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:58.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.113 --rc genhtml_branch_coverage=1 00:38:58.113 --rc genhtml_function_coverage=1 00:38:58.113 --rc genhtml_legend=1 00:38:58.113 --rc geninfo_all_blocks=1 00:38:58.113 --rc geninfo_unexecuted_blocks=1 00:38:58.113 00:38:58.113 ' 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:58.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.113 --rc genhtml_branch_coverage=1 00:38:58.113 --rc genhtml_function_coverage=1 00:38:58.113 --rc genhtml_legend=1 00:38:58.113 --rc geninfo_all_blocks=1 00:38:58.113 --rc geninfo_unexecuted_blocks=1 00:38:58.113 00:38:58.113 ' 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:58.113 05:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:58.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:58.113 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:58.114 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:58.114 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:58.114 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:38:58.114 05:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:38:58.114 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:58.114 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:58.114 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:58.114 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:58.114 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:58.114 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:58.114 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:58.114 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:58.114 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:58.114 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:58.114 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:38:58.114 05:33:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:00.644 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:00.644 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:00.644 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:00.644 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:00.644 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:00.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:00.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:39:00.644 00:39:00.644 --- 10.0.0.2 ping statistics --- 00:39:00.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:00.644 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:00.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:00.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:39:00.645 00:39:00.645 --- 10.0.0.1 ping statistics --- 00:39:00.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:00.645 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=602453 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 602453 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 602453 ']' 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:00.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:39:00.645 [2024-12-09 05:33:54.522988] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:39:00.645 [2024-12-09 05:33:54.523082] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:00.645 [2024-12-09 05:33:54.594769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:00.645 [2024-12-09 05:33:54.649890] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:00.645 [2024-12-09 05:33:54.649935] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:00.645 [2024-12-09 05:33:54.649974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:00.645 [2024-12-09 05:33:54.649986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:00.645 [2024-12-09 05:33:54.649996] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:00.645 [2024-12-09 05:33:54.651572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:00.645 [2024-12-09 05:33:54.651655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:00.645 [2024-12-09 05:33:54.651652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:00.645 [2024-12-09 05:33:54.651604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:39:00.645 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:39:00.903 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:39:00.903 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:39:00.903 "nvmf_tgt_1" 00:39:00.903 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:39:01.160 "nvmf_tgt_2" 00:39:01.160 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
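The multitarget test exercises hosting several independent NVMe-oF targets inside one nvmf_tgt process. As the trace here and just below shows, multitarget_rpc.py first confirms that only the default target exists (jq length is 1), creates nvmf_tgt_1 and nvmf_tgt_2, checks the count has grown to 3, deletes both, and finally checks the count is back to 1. The same flow, condensed into a hedged sketch around the helper script this log already uses (paths and arguments copied from the trace):

    MT_RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

    $MT_RPC nvmf_get_targets | jq length              # 1: only the default target
    $MT_RPC nvmf_create_target -n nvmf_tgt_1 -s 32
    $MT_RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    $MT_RPC nvmf_get_targets | jq length              # 3
    $MT_RPC nvmf_delete_target -n nvmf_tgt_1
    $MT_RPC nvmf_delete_target -n nvmf_tgt_2
    $MT_RPC nvmf_get_targets | jq length              # back to 1
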
00:39:01.160 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:39:01.160 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:39:01.160 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:39:01.417 true 00:39:01.417 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:39:01.417 true 00:39:01.417 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:39:01.417 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:39:01.675 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:39:01.675 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:39:01.675 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:39:01.675 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:01.675 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:39:01.675 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:01.675 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:39:01.675 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:01.675 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:01.675 rmmod nvme_tcp 00:39:01.675 rmmod nvme_fabrics 00:39:01.675 rmmod nvme_keyring 00:39:01.675 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:01.675 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:39:01.675 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:39:01.675 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 602453 ']' 00:39:01.675 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 602453 00:39:01.675 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 602453 ']' 00:39:01.675 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 602453 00:39:01.675 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:39:01.675 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:01.675 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 602453 00:39:01.675 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:01.675 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:01.675 05:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 602453' 00:39:01.675 killing process with pid 602453 00:39:01.675 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 602453 00:39:01.675 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 602453 00:39:01.934 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:01.934 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:01.934 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:01.934 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:39:01.934 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:39:01.934 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:01.934 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:39:01.934 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:01.934 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:01.934 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:01.934 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:01.934 05:33:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:03.841 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:03.841 00:39:03.841 real 0m6.058s 00:39:03.841 user 0m6.945s 00:39:03.841 sys 0m2.090s 00:39:03.841 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:03.841 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:39:03.841 ************************************ 00:39:03.841 END TEST nvmf_multitarget 00:39:03.841 ************************************ 00:39:04.100 05:33:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:39:04.100 05:33:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:04.100 05:33:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:04.100 05:33:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:39:04.100 ************************************ 00:39:04.100 START TEST nvmf_rpc 00:39:04.100 ************************************ 00:39:04.100 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:39:04.100 * Looking for test storage... 
00:39:04.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:04.100 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:04.100 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:04.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:04.101 --rc genhtml_branch_coverage=1 00:39:04.101 --rc genhtml_function_coverage=1 00:39:04.101 --rc genhtml_legend=1 00:39:04.101 --rc geninfo_all_blocks=1 00:39:04.101 --rc geninfo_unexecuted_blocks=1 00:39:04.101 00:39:04.101 ' 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:04.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:04.101 --rc genhtml_branch_coverage=1 00:39:04.101 --rc genhtml_function_coverage=1 00:39:04.101 --rc genhtml_legend=1 00:39:04.101 --rc geninfo_all_blocks=1 00:39:04.101 --rc geninfo_unexecuted_blocks=1 00:39:04.101 00:39:04.101 ' 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:04.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:04.101 --rc genhtml_branch_coverage=1 00:39:04.101 --rc genhtml_function_coverage=1 00:39:04.101 --rc genhtml_legend=1 00:39:04.101 --rc geninfo_all_blocks=1 00:39:04.101 --rc geninfo_unexecuted_blocks=1 00:39:04.101 00:39:04.101 ' 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:04.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:04.101 --rc genhtml_branch_coverage=1 00:39:04.101 --rc genhtml_function_coverage=1 00:39:04.101 --rc genhtml_legend=1 00:39:04.101 --rc geninfo_all_blocks=1 00:39:04.101 --rc geninfo_unexecuted_blocks=1 00:39:04.101 00:39:04.101 ' 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
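The scripts/common.sh version check stepped through above (lt 1.15 2, deciding which lcov coverage options to export) reduces to the comparison sketched below. This is a reconstruction from the xtrace output, not the actual helper source, and it omits the per-field decimal validation the real helper performs.

# Sketch: dotted-version "less than", mirroring the cmp_versions trace above.
lt() {                                   # usage: lt 1.15 2  ->  exit 0 if $1 < $2
    local -a ver1 ver2
    local v d1 d2
    IFS=.-: read -ra ver1 <<< "$1"       # split on the same separators as the trace
    IFS=.-: read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        d1=${ver1[v]:-0}                 # missing fields compare as 0
        d2=${ver2[v]:-0}
        (( d1 > d2 )) && return 1        # first differing field decides
        (( d1 < d2 )) && return 0
    done
    return 1                             # equal versions are not strictly less-than
}

lt 1.15 2 && echo 'lcov is a 1.x release'   # true here, which is why the --rc lcov_branch_coverage/lcov_function_coverage options are exported in the trace above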
00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:04.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:04.101 05:33:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:04.101 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:04.102 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:04.102 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:04.102 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:04.102 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:04.102 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:04.102 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:39:04.102 05:33:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:06.636 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:06.636 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:39:06.636 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:06.636 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:06.636 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:06.636 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:06.636 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:06.636 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:39:06.636 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:06.636 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:39:06.636 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:06.637 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:06.637 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:06.637 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:06.637 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:06.637 05:34:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:06.637 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:06.638 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:06.638 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:06.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:06.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:39:06.638 00:39:06.638 --- 10.0.0.2 ping statistics --- 00:39:06.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:06.638 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:39:06.638 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:06.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:06.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:39:06.638 00:39:06.638 --- 10.0.0.1 ping statistics --- 00:39:06.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:06.638 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:39:06.638 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:06.638 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:39:06.638 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:06.638 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:06.638 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:06.638 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:06.638 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:06.638 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:06.638 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:06.638 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:39:06.638 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:06.638 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:06.638 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:06.638 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=604670 00:39:06.638 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:39:06.638 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 604670 00:39:06.638 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 604670 ']' 00:39:06.638 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:06.638 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:06.638 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:06.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:06.638 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:06.638 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:06.638 [2024-12-09 05:34:00.596627] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
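The ip/iptables sequence traced just above (nvmf_tcp_init) builds the two-sided test topology sketched below. The commands are lifted from the trace; the cvl_0_0/cvl_0_1 device names, the 10.0.0.x addresses, and the SPDK_NVMF iptables comment are specific to this e810 rig rather than anything general.

# Sketch of the traced setup: target NIC moved into its own network namespace,
# initiator NIC left in the root namespace, TCP port 4420 opened towards the target.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                                  # root ns reaches the target IP
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns reaches the initiator IP

The nvmf_tgt whose startup notices appear around this point is then launched through NVMF_TARGET_NS_CMD, i.e. prefixed with ip netns exec cvl_0_0_ns_spdk, so the listener it later opens on 10.0.0.2 port 4420 is only reachable from the initiator side via cvl_0_1.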
00:39:06.638 [2024-12-09 05:34:00.596694] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:06.638 [2024-12-09 05:34:00.665334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:06.638 [2024-12-09 05:34:00.719812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:06.638 [2024-12-09 05:34:00.719888] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:06.638 [2024-12-09 05:34:00.719901] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:06.638 [2024-12-09 05:34:00.719912] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:06.638 [2024-12-09 05:34:00.719922] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:06.638 [2024-12-09 05:34:00.721475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:06.638 [2024-12-09 05:34:00.721555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:06.638 [2024-12-09 05:34:00.721631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:06.638 [2024-12-09 05:34:00.721621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:06.896 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:06.896 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:39:06.896 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:06.896 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:06.896 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:06.896 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:06.896 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:39:06.896 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.896 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:06.896 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.896 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:39:06.896 "tick_rate": 2700000000, 00:39:06.896 "poll_groups": [ 00:39:06.896 { 00:39:06.896 "name": "nvmf_tgt_poll_group_000", 00:39:06.896 "admin_qpairs": 0, 00:39:06.896 "io_qpairs": 0, 00:39:06.896 "current_admin_qpairs": 0, 00:39:06.896 "current_io_qpairs": 0, 00:39:06.896 "pending_bdev_io": 0, 00:39:06.896 "completed_nvme_io": 0, 00:39:06.896 "transports": [] 00:39:06.896 }, 00:39:06.896 { 00:39:06.896 "name": "nvmf_tgt_poll_group_001", 00:39:06.896 "admin_qpairs": 0, 00:39:06.896 "io_qpairs": 0, 00:39:06.896 "current_admin_qpairs": 0, 00:39:06.896 "current_io_qpairs": 0, 00:39:06.896 "pending_bdev_io": 0, 00:39:06.896 "completed_nvme_io": 0, 00:39:06.896 "transports": [] 00:39:06.896 }, 00:39:06.896 { 00:39:06.896 "name": "nvmf_tgt_poll_group_002", 00:39:06.896 "admin_qpairs": 0, 00:39:06.896 "io_qpairs": 0, 00:39:06.896 
"current_admin_qpairs": 0, 00:39:06.896 "current_io_qpairs": 0, 00:39:06.896 "pending_bdev_io": 0, 00:39:06.896 "completed_nvme_io": 0, 00:39:06.896 "transports": [] 00:39:06.896 }, 00:39:06.896 { 00:39:06.896 "name": "nvmf_tgt_poll_group_003", 00:39:06.896 "admin_qpairs": 0, 00:39:06.896 "io_qpairs": 0, 00:39:06.896 "current_admin_qpairs": 0, 00:39:06.896 "current_io_qpairs": 0, 00:39:06.896 "pending_bdev_io": 0, 00:39:06.896 "completed_nvme_io": 0, 00:39:06.896 "transports": [] 00:39:06.896 } 00:39:06.896 ] 00:39:06.896 }' 00:39:06.896 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:39:06.897 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:39:06.897 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:39:06.897 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:39:06.897 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:39:06.897 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:39:06.897 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:39:06.897 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:06.897 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.897 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:06.897 [2024-12-09 05:34:00.966753] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:06.897 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.897 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:39:06.897 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.897 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:06.897 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.897 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:39:06.897 "tick_rate": 2700000000, 00:39:06.897 "poll_groups": [ 00:39:06.897 { 00:39:06.897 "name": "nvmf_tgt_poll_group_000", 00:39:06.897 "admin_qpairs": 0, 00:39:06.897 "io_qpairs": 0, 00:39:06.897 "current_admin_qpairs": 0, 00:39:06.897 "current_io_qpairs": 0, 00:39:06.897 "pending_bdev_io": 0, 00:39:06.897 "completed_nvme_io": 0, 00:39:06.897 "transports": [ 00:39:06.897 { 00:39:06.897 "trtype": "TCP" 00:39:06.897 } 00:39:06.897 ] 00:39:06.897 }, 00:39:06.897 { 00:39:06.897 "name": "nvmf_tgt_poll_group_001", 00:39:06.897 "admin_qpairs": 0, 00:39:06.897 "io_qpairs": 0, 00:39:06.897 "current_admin_qpairs": 0, 00:39:06.897 "current_io_qpairs": 0, 00:39:06.897 "pending_bdev_io": 0, 00:39:06.897 "completed_nvme_io": 0, 00:39:06.897 "transports": [ 00:39:06.897 { 00:39:06.897 "trtype": "TCP" 00:39:06.897 } 00:39:06.897 ] 00:39:06.897 }, 00:39:06.897 { 00:39:06.897 "name": "nvmf_tgt_poll_group_002", 00:39:06.897 "admin_qpairs": 0, 00:39:06.897 "io_qpairs": 0, 00:39:06.897 "current_admin_qpairs": 0, 00:39:06.897 "current_io_qpairs": 0, 00:39:06.897 "pending_bdev_io": 0, 00:39:06.897 "completed_nvme_io": 0, 00:39:06.897 "transports": [ 00:39:06.897 { 00:39:06.897 "trtype": "TCP" 
00:39:06.897 } 00:39:06.897 ] 00:39:06.897 }, 00:39:06.897 { 00:39:06.897 "name": "nvmf_tgt_poll_group_003", 00:39:06.897 "admin_qpairs": 0, 00:39:06.897 "io_qpairs": 0, 00:39:06.897 "current_admin_qpairs": 0, 00:39:06.897 "current_io_qpairs": 0, 00:39:06.897 "pending_bdev_io": 0, 00:39:06.897 "completed_nvme_io": 0, 00:39:06.897 "transports": [ 00:39:06.897 { 00:39:06.897 "trtype": "TCP" 00:39:06.897 } 00:39:06.897 ] 00:39:06.897 } 00:39:06.897 ] 00:39:06.897 }' 00:39:06.897 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:39:06.897 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:39:06.897 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:39:06.897 05:34:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:39:06.897 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:39:06.897 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:39:06.897 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:39:06.897 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:39:06.897 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:39:06.897 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:39:06.897 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:39:06.897 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:39:06.897 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:39:06.897 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:39:06.897 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.897 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:06.897 Malloc1 00:39:06.897 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.897 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:06.897 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.897 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:06.897 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.897 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:06.897 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.897 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:06.897 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.897 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:39:06.897 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.897 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:07.155 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:07.155 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:07.155 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:07.155 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:07.155 [2024-12-09 05:34:01.130232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:07.155 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:07.155 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:39:07.155 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:39:07.155 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:39:07.155 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:39:07.155 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:07.155 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:39:07.155 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:07.155 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:39:07.155 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:07.155 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:39:07.155 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:39:07.155 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:39:07.155 [2024-12-09 05:34:01.152771] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:39:07.155 Failed to write to /dev/nvme-fabrics: Input/output error 00:39:07.155 could not add new controller: failed to write to nvme-fabrics device 00:39:07.155 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:39:07.155 05:34:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:07.155 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:07.155 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:07.155 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:07.155 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:07.155 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:07.155 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:07.155 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:07.720 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:39:07.720 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:39:07.720 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:07.720 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:39:07.720 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:39:09.614 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:09.614 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:09.614 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:09.614 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:39:09.614 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:09.614 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:39:09.614 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:09.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:09.872 [2024-12-09 05:34:03.895304] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:39:09.872 Failed to write to /dev/nvme-fabrics: Input/output error 00:39:09.872 could not add new controller: failed to write to nvme-fabrics device 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:09.872 
05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.872 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:10.438 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:39:10.438 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:39:10.438 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:10.438 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:39:10.438 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:39:12.335 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:12.335 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:12.335 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:12.335 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:39:12.335 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:12.335 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:39:12.335 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:12.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:39:12.592 
05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:12.592 [2024-12-09 05:34:06.688174] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.592 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:13.524 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:39:13.524 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:39:13.524 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:13.524 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:39:13.524 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:39:15.430 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:15.430 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:15.430 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:15.430 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:39:15.430 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:15.430 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:39:15.430 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:15.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:15.430 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:15.430 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:39:15.430 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:15.430 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:15.430 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:15.430 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:15.430 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:39:15.430 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:15.430 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:15.430 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:15.430 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:15.430 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:15.430 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:15.430 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:15.430 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:15.430 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:39:15.430 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:39:15.430 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:15.430 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:15.431 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:15.431 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:15.431 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:15.431 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:15.431 [2024-12-09 05:34:09.523358] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:15.431 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:15.431 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:39:15.431 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:15.431 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:15.431 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:15.431 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:39:15.431 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:15.431 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:15.431 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:15.431 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:16.364 05:34:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:39:16.364 05:34:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:39:16.364 05:34:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:16.364 05:34:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:39:16.364 05:34:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:18.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:18.262 [2024-12-09 05:34:12.345931] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.262 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:18.827 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:39:18.827 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:39:18.827 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:18.827 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:39:18.827 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:39:21.351 
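
Each of the five iterations traced here runs the same create/connect/teardown round trip, driving the SPDK target through RPCs and the kernel initiator through nvme-cli. A condensed sketch of one iteration follows; rpc_cmd is shown as a direct scripts/rpc.py call, the host NQN/ID come from nvmf/common.sh, and the wait helpers are the ones sketched earlier, so treat this as an illustration of the sequence rather than the verbatim target/rpc.sh loop:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    for i in $(seq 1 5); do
        # Target side: subsystem with a fixed serial, a TCP listener and one namespace.
        $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
        $rpc nvmf_subsystem_allow_any_host "$nqn"

        # Initiator side: connect, wait for the namespace, then disconnect again.
        nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
            -t tcp -n "$nqn" -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n "$nqn"
        waitforserial_disconnect SPDKISFASTANDAWESOME

        # Target side again: drop the namespace and the subsystem.
        $rpc nvmf_subsystem_remove_ns "$nqn" 5
        $rpc nvmf_delete_subsystem "$nqn"
    done
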
05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:21.351 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:21.351 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:21.351 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:39:21.351 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:21.351 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:39:21.351 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:21.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:21.351 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:21.351 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:39:21.351 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:21.351 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:21.351 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:21.351 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:21.351 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:39:21.351 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:21.351 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:21.351 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:21.351 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:21.351 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:21.351 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:21.351 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:21.351 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:21.351 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:39:21.351 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:39:21.351 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:21.351 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:21.351 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:21.352 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:21.352 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:39:21.352 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:21.352 [2024-12-09 05:34:15.142005] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:21.352 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:21.352 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:39:21.352 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:21.352 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:21.352 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:21.352 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:39:21.352 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:21.352 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:21.352 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:21.352 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:21.610 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:39:21.610 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:39:21.610 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:21.610 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:39:21.610 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:39:24.134 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:24.134 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:24.134 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:24.134 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:39:24.134 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:24.134 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:39:24.134 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:24.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:24.134 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:24.134 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:39:24.134 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:24.134 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:39:24.134 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:24.134 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:24.134 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:39:24.134 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:24.135 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.135 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:24.135 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.135 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:24.135 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.135 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:24.135 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.135 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:39:24.135 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:39:24.135 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.135 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:24.135 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.135 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:24.135 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.135 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:24.135 [2024-12-09 05:34:17.917120] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:24.135 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.135 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:39:24.135 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.135 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:24.135 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.135 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:39:24.135 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.135 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:24.135 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.135 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:24.700 05:34:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:39:24.700 05:34:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:39:24.700 05:34:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:24.700 05:34:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:39:24.700 05:34:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:26.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:39:26.601 
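
The second loop that starts here (target/rpc.sh@99, another seq 1 5) repeats the subsystem lifecycle purely through RPCs, with no host-side nvme connect, so it exercises namespace add/remove on an idle subsystem. One iteration, condensed with the same caveats and reusing $rpc and $nqn from the sketch above:

    for i in $(seq 1 5); do
        $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns "$nqn" Malloc1      # nsid auto-assigned, becomes 1
        $rpc nvmf_subsystem_allow_any_host "$nqn"
        $rpc nvmf_subsystem_remove_ns "$nqn" 1
        $rpc nvmf_delete_subsystem "$nqn"
    done
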
05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.601 [2024-12-09 05:34:20.750858] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.601 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:39:26.602 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:39:26.602 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.602 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:39:26.602 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.602 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:26.602 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.602 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.602 [2024-12-09 05:34:20.798899] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:26.602 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.602 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:26.602 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.602 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.602 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.602 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:39:26.602 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.602 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.602 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.602 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:26.602 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.602 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.859 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.860 
05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.860 [2024-12-09 05:34:20.847057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.860 [2024-12-09 05:34:20.895219] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.860 [2024-12-09 05:34:20.943423] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:39:26.860 "tick_rate": 2700000000, 00:39:26.860 "poll_groups": [ 00:39:26.860 { 00:39:26.860 "name": "nvmf_tgt_poll_group_000", 00:39:26.860 "admin_qpairs": 2, 00:39:26.860 "io_qpairs": 84, 00:39:26.860 "current_admin_qpairs": 0, 00:39:26.860 "current_io_qpairs": 0, 00:39:26.860 "pending_bdev_io": 0, 00:39:26.860 "completed_nvme_io": 135, 00:39:26.860 "transports": [ 00:39:26.860 { 00:39:26.860 "trtype": "TCP" 00:39:26.860 } 00:39:26.860 ] 00:39:26.860 }, 00:39:26.860 { 00:39:26.860 "name": "nvmf_tgt_poll_group_001", 00:39:26.860 "admin_qpairs": 2, 00:39:26.860 "io_qpairs": 84, 00:39:26.860 "current_admin_qpairs": 0, 00:39:26.860 "current_io_qpairs": 0, 00:39:26.860 "pending_bdev_io": 0, 00:39:26.860 "completed_nvme_io": 331, 00:39:26.860 "transports": [ 00:39:26.860 { 00:39:26.860 "trtype": "TCP" 00:39:26.860 } 00:39:26.860 ] 00:39:26.860 }, 00:39:26.860 { 00:39:26.860 "name": "nvmf_tgt_poll_group_002", 00:39:26.860 "admin_qpairs": 1, 00:39:26.860 "io_qpairs": 84, 00:39:26.860 "current_admin_qpairs": 0, 00:39:26.860 "current_io_qpairs": 0, 00:39:26.860 "pending_bdev_io": 0, 00:39:26.860 "completed_nvme_io": 87, 00:39:26.860 "transports": [ 00:39:26.860 { 00:39:26.860 "trtype": "TCP" 00:39:26.860 } 00:39:26.860 ] 00:39:26.860 }, 00:39:26.860 { 00:39:26.860 "name": "nvmf_tgt_poll_group_003", 00:39:26.860 "admin_qpairs": 2, 00:39:26.860 "io_qpairs": 84, 00:39:26.860 "current_admin_qpairs": 0, 00:39:26.860 "current_io_qpairs": 0, 00:39:26.860 "pending_bdev_io": 0, 00:39:26.860 "completed_nvme_io": 133, 00:39:26.860 "transports": [ 00:39:26.860 { 00:39:26.860 "trtype": "TCP" 00:39:26.860 } 00:39:26.860 ] 00:39:26.860 } 00:39:26.860 ] 00:39:26.860 }' 00:39:26.860 05:34:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:39:26.860 05:34:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:39:26.860 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:39:26.860 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:39:26.860 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:39:26.860 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:39:26.860 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:39:26.860 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:39:26.860 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:39:26.860 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:39:26.860 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:39:26.860 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:26.860 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:39:26.860 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:26.860 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:39:26.860 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:26.861 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:26.861 rmmod nvme_tcp 00:39:27.118 rmmod nvme_fabrics 00:39:27.118 rmmod nvme_keyring 00:39:27.118 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:27.118 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:39:27.118 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:39:27.118 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 604670 ']' 00:39:27.118 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 604670 00:39:27.118 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 604670 ']' 00:39:27.118 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 604670 00:39:27.118 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:39:27.118 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:27.118 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 604670 00:39:27.118 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:27.118 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:27.118 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 604670' 
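
The jsum helper used to validate nvmf_get_stats above sums one numeric field across all poll groups: it filters the captured JSON with jq and totals the resulting column with awk, and the test only asserts that the totals are positive (7 admin qpairs and 336 io qpairs in this run). A standalone sketch of the same pattern, with the $stats variable holding the JSON printed earlier:

    jsum() {
        # Sum every value selected by the jq filter in $1 from the JSON in $stats.
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s += $1} END {print s}'
    }

    stats=$($rpc nvmf_get_stats)
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))
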
00:39:27.118 killing process with pid 604670 00:39:27.118 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 604670 00:39:27.118 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 604670 00:39:27.377 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:27.377 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:27.377 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:27.377 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:39:27.377 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:39:27.377 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:27.377 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:39:27.377 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:27.377 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:27.377 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:27.377 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:27.377 05:34:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:29.914 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:29.914 00:39:29.914 real 0m25.411s 00:39:29.914 user 1m22.311s 00:39:29.914 sys 0m4.113s 00:39:29.914 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:29.914 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:39:29.914 ************************************ 00:39:29.914 END TEST nvmf_rpc 00:39:29.914 ************************************ 00:39:29.914 05:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:39:29.914 05:34:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:29.914 05:34:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:29.914 05:34:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:39:29.914 ************************************ 00:39:29.914 START TEST nvmf_invalid 00:39:29.914 ************************************ 00:39:29.914 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:39:29.914 * Looking for test storage... 
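
The nvmftestfini teardown traced here unloads the kernel initiator modules, kills the nvmf_tgt process (pid 604670 in this run), strips the SPDK_NVMF firewall rules added during setup and flushes the spare test interface. Roughly, with the error handling and network-namespace variants of nvmf/common.sh omitted and the pid variable name chosen only for illustration:

    # Host side: drop the NVMe/TCP initiator stack (nvme_tcp, nvme_fabrics, nvme_keyring).
    modprobe -v -r nvme-tcp || true
    modprobe -v -r nvme-fabrics || true

    # Target side: stop the SPDK nvmf_tgt reactor that served the subsystems.
    kill "$nvmfpid" && wait "$nvmfpid"

    # Remove only the SPDK_NVMF iptables rules, keeping everything else intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Release the secondary test interface that carried 10.0.0.2.
    ip -4 addr flush cvl_0_1
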
00:39:29.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:29.914 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:29.914 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:39:29.914 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:29.914 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:29.914 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:29.914 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:29.914 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:29.914 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:39:29.914 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:39:29.914 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:39:29.914 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:39:29.914 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:39:29.914 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:29.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:29.915 --rc genhtml_branch_coverage=1 00:39:29.915 --rc genhtml_function_coverage=1 00:39:29.915 --rc genhtml_legend=1 00:39:29.915 --rc geninfo_all_blocks=1 00:39:29.915 --rc geninfo_unexecuted_blocks=1 00:39:29.915 00:39:29.915 ' 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:29.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:29.915 --rc genhtml_branch_coverage=1 00:39:29.915 --rc genhtml_function_coverage=1 00:39:29.915 --rc genhtml_legend=1 00:39:29.915 --rc geninfo_all_blocks=1 00:39:29.915 --rc geninfo_unexecuted_blocks=1 00:39:29.915 00:39:29.915 ' 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:29.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:29.915 --rc genhtml_branch_coverage=1 00:39:29.915 --rc genhtml_function_coverage=1 00:39:29.915 --rc genhtml_legend=1 00:39:29.915 --rc geninfo_all_blocks=1 00:39:29.915 --rc geninfo_unexecuted_blocks=1 00:39:29.915 00:39:29.915 ' 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:29.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:29.915 --rc genhtml_branch_coverage=1 00:39:29.915 --rc genhtml_function_coverage=1 00:39:29.915 --rc genhtml_legend=1 00:39:29.915 --rc geninfo_all_blocks=1 00:39:29.915 --rc geninfo_unexecuted_blocks=1 00:39:29.915 00:39:29.915 ' 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:39:29.915 05:34:23 
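
The cmp_versions trace above (scripts/common.sh, invoked as lt 1.15 2) decides whether the installed lcov predates 2.x so that the matching coverage options get exported. It splits both version strings on dots and compares them component by component. A minimal standalone equivalent, assuming purely numeric components, which the real helper additionally validates:

    version_lt() {
        # Return 0 if dotted version $1 sorts strictly before $2 (e.g. 1.15 < 2).
        local -a v1 v2
        IFS='.-' read -ra v1 <<< "$1"
        IFS='.-' read -ra v2 <<< "$2"
        local i max=${#v1[@]}
        (( ${#v2[@]} > max )) && max=${#v2[@]}
        for (( i = 0; i < max; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov older than 2.x: use the legacy --rc lcov_* options"
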
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:29.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:29.915 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:39:29.916 05:34:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:39:31.825 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:31.825 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:39:31.825 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:31.825 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:31.825 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:31.825 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:31.825 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:31.825 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:39:31.825 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:31.825 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:31.826 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:31.826 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:31.826 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:31.826 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:31.826 05:34:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:31.826 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:31.826 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:31.826 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:31.826 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:32.085 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:32.085 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:32.085 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:32.085 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:32.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:32.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:39:32.085 00:39:32.085 --- 10.0.0.2 ping statistics --- 00:39:32.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:32.085 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:39:32.085 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:32.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:32.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:39:32.085 00:39:32.085 --- 10.0.0.1 ping statistics --- 00:39:32.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:32.085 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:39:32.085 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:32.085 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:39:32.085 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:32.085 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:32.085 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:32.085 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:32.085 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:32.085 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:32.085 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:32.085 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:39:32.085 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:32.085 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:32.085 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:39:32.085 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=609166 00:39:32.085 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:39:32.085 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 609166 00:39:32.085 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 609166 ']' 00:39:32.085 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:32.085 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:32.085 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:32.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:32.085 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:32.085 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:39:32.085 [2024-12-09 05:34:26.274741] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
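The namespace plumbing traced above boils down to roughly the following sequence. This is a minimal sketch for orientation only, not the autotest's own nvmf_tcp_init: the interface names cvl_0_0/cvl_0_1, the cvl_0_0_ns_spdk namespace, the 10.0.0.x/24 addresses and TCP port 4420 are taken from the log, and the commands assume root privileges on a host that already has those two ports.

    # target-side port goes into its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator side stays in the default namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the default NVMe/TCP port and confirm the target address answers
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2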
00:39:32.085 [2024-12-09 05:34:26.274840] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:32.343 [2024-12-09 05:34:26.348257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:32.343 [2024-12-09 05:34:26.406212] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:32.343 [2024-12-09 05:34:26.406298] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:32.343 [2024-12-09 05:34:26.406314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:32.343 [2024-12-09 05:34:26.406326] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:32.343 [2024-12-09 05:34:26.406337] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:32.343 [2024-12-09 05:34:26.407870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:32.343 [2024-12-09 05:34:26.407930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:32.343 [2024-12-09 05:34:26.407996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:32.343 [2024-12-09 05:34:26.407999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:32.343 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:32.343 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:39:32.343 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:32.343 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:32.343 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:39:32.343 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:32.343 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:39:32.343 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31560 00:39:32.599 [2024-12-09 05:34:26.809690] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:39:32.857 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:39:32.857 { 00:39:32.857 "nqn": "nqn.2016-06.io.spdk:cnode31560", 00:39:32.857 "tgt_name": "foobar", 00:39:32.857 "method": "nvmf_create_subsystem", 00:39:32.857 "req_id": 1 00:39:32.857 } 00:39:32.857 Got JSON-RPC error response 00:39:32.857 response: 00:39:32.857 { 00:39:32.857 "code": -32603, 00:39:32.857 "message": "Unable to find target foobar" 00:39:32.857 }' 00:39:32.857 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:39:32.857 { 00:39:32.857 "nqn": "nqn.2016-06.io.spdk:cnode31560", 00:39:32.857 "tgt_name": "foobar", 00:39:32.857 "method": "nvmf_create_subsystem", 00:39:32.857 "req_id": 1 00:39:32.857 } 00:39:32.857 Got JSON-RPC error response 00:39:32.857 
response: 00:39:32.857 { 00:39:32.857 "code": -32603, 00:39:32.857 "message": "Unable to find target foobar" 00:39:32.857 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:39:32.857 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:39:32.857 05:34:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode27289 00:39:33.114 [2024-12-09 05:34:27.082663] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27289: invalid serial number 'SPDKISFASTANDAWESOME' 00:39:33.114 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:39:33.114 { 00:39:33.114 "nqn": "nqn.2016-06.io.spdk:cnode27289", 00:39:33.114 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:39:33.114 "method": "nvmf_create_subsystem", 00:39:33.114 "req_id": 1 00:39:33.114 } 00:39:33.114 Got JSON-RPC error response 00:39:33.114 response: 00:39:33.114 { 00:39:33.114 "code": -32602, 00:39:33.114 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:39:33.114 }' 00:39:33.114 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:39:33.114 { 00:39:33.114 "nqn": "nqn.2016-06.io.spdk:cnode27289", 00:39:33.114 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:39:33.114 "method": "nvmf_create_subsystem", 00:39:33.114 "req_id": 1 00:39:33.114 } 00:39:33.114 Got JSON-RPC error response 00:39:33.114 response: 00:39:33.114 { 00:39:33.114 "code": -32602, 00:39:33.114 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:39:33.114 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:39:33.114 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:39:33.114 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode15326 00:39:33.372 [2024-12-09 05:34:27.343460] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15326: invalid model number 'SPDK_Controller' 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:39:33.372 { 00:39:33.372 "nqn": "nqn.2016-06.io.spdk:cnode15326", 00:39:33.372 "model_number": "SPDK_Controller\u001f", 00:39:33.372 "method": "nvmf_create_subsystem", 00:39:33.372 "req_id": 1 00:39:33.372 } 00:39:33.372 Got JSON-RPC error response 00:39:33.372 response: 00:39:33.372 { 00:39:33.372 "code": -32602, 00:39:33.372 "message": "Invalid MN SPDK_Controller\u001f" 00:39:33.372 }' 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:39:33.372 { 00:39:33.372 "nqn": "nqn.2016-06.io.spdk:cnode15326", 00:39:33.372 "model_number": "SPDK_Controller\u001f", 00:39:33.372 "method": "nvmf_create_subsystem", 00:39:33.372 "req_id": 1 00:39:33.372 } 00:39:33.372 Got JSON-RPC error response 00:39:33.372 response: 00:39:33.372 { 00:39:33.372 "code": -32602, 00:39:33.372 "message": "Invalid MN SPDK_Controller\u001f" 00:39:33.372 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:39:33.372 05:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.372 05:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:39:33.372 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:39:33.373 05:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:39:33.373 
05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ W == \- ]] 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'W5}[X)9I%R,qE+{6@6H4U' 00:39:33.373 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'W5}[X)9I%R,qE+{6@6H4U' nqn.2016-06.io.spdk:cnode2571 00:39:33.631 [2024-12-09 05:34:27.716771] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2571: invalid serial number 'W5}[X)9I%R,qE+{6@6H4U' 00:39:33.631 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:39:33.631 { 00:39:33.631 "nqn": "nqn.2016-06.io.spdk:cnode2571", 00:39:33.632 "serial_number": "W5}[X)9I%R,qE+{6@6H4U", 00:39:33.632 "method": "nvmf_create_subsystem", 00:39:33.632 "req_id": 1 00:39:33.632 } 00:39:33.632 Got JSON-RPC error response 00:39:33.632 response: 00:39:33.632 { 00:39:33.632 "code": -32602, 00:39:33.632 "message": "Invalid SN W5}[X)9I%R,qE+{6@6H4U" 00:39:33.632 }' 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:39:33.632 { 00:39:33.632 "nqn": "nqn.2016-06.io.spdk:cnode2571", 00:39:33.632 "serial_number": "W5}[X)9I%R,qE+{6@6H4U", 00:39:33.632 "method": "nvmf_create_subsystem", 00:39:33.632 "req_id": 1 00:39:33.632 } 00:39:33.632 Got JSON-RPC error response 00:39:33.632 response: 00:39:33.632 { 00:39:33.632 "code": -32602, 00:39:33.632 "message": "Invalid SN W5}[X)9I%R,qE+{6@6H4U" 00:39:33.632 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' 
'78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:39:33.632 05:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 
00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.632 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 
00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
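The long run of printf / echo -e / string+= entries above and below is invalid.sh building a random model-number string one character at a time: each pass prints a code point as hex, converts the hex escape back into a character, and appends it to the string under test. A condensed sketch of that idiom follows; the helper name and the way the code point is chosen are illustrative assumptions, only the printf/echo/append pattern itself is taken from the trace.

    # Illustrative sketch of the per-character append loop seen in the trace.
    # gen_random_string and the RANDOM % 95 + 32 range are assumptions for this example.
    gen_random_string() {
        local length=$1 string='' ll hex
        for (( ll = 0; ll < length; ll++ )); do
            printf -v hex '%x' $(( RANDOM % 95 + 32 ))   # pick a printable ASCII code point
            string+=$(echo -e "\x$hex")                  # turn the hex escape into a character
        done
        printf '%s\n' "$string"
    }

The finished string is then handed to rpc.py nvmf_create_subsystem -d, and the test asserts that the target rejects it with an "Invalid MN" JSON-RPC error, which is exactly what the entries a little further down show.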
00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.633 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.891 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:39:33.891 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:39:33.891 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:39:33.891 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.891 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.891 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:39:33.891 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:39:33.891 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:39:33.891 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:39:33.891 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:39:33.891 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ K == \- ]] 00:39:33.891 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'K\Wz{$S"{B.ty.14J@RFjS{%PfMwGA{jM/Dk+X}b~' 00:39:33.891 05:34:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'K\Wz{$S"{B.ty.14J@RFjS{%PfMwGA{jM/Dk+X}b~' nqn.2016-06.io.spdk:cnode16192 00:39:33.891 [2024-12-09 05:34:28.098036] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16192: invalid model number 'K\Wz{$S"{B.ty.14J@RFjS{%PfMwGA{jM/Dk+X}b~' 00:39:34.149 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:39:34.149 { 00:39:34.149 "nqn": "nqn.2016-06.io.spdk:cnode16192", 00:39:34.149 "model_number": "K\\Wz{$S\"{B.ty.14J@RFjS{%PfMwGA{jM/Dk+X}b~", 00:39:34.149 "method": "nvmf_create_subsystem", 00:39:34.149 "req_id": 1 00:39:34.149 } 00:39:34.149 Got JSON-RPC error response 00:39:34.149 response: 00:39:34.149 { 00:39:34.149 "code": -32602, 00:39:34.149 "message": "Invalid MN K\\Wz{$S\"{B.ty.14J@RFjS{%PfMwGA{jM/Dk+X}b~" 00:39:34.149 }' 00:39:34.149 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:39:34.149 { 00:39:34.149 "nqn": "nqn.2016-06.io.spdk:cnode16192", 00:39:34.149 "model_number": "K\\Wz{$S\"{B.ty.14J@RFjS{%PfMwGA{jM/Dk+X}b~", 00:39:34.149 "method": "nvmf_create_subsystem", 00:39:34.149 "req_id": 1 00:39:34.149 } 00:39:34.149 Got JSON-RPC error response 00:39:34.149 response: 00:39:34.149 { 00:39:34.149 "code": -32602, 00:39:34.149 "message": "Invalid MN K\\Wz{$S\"{B.ty.14J@RFjS{%PfMwGA{jM/Dk+X}b~" 00:39:34.149 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:39:34.149 05:34:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:39:34.149 [2024-12-09 05:34:28.362975] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:34.406 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:39:34.664 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:39:34.664 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:39:34.664 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:39:34.664 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:39:34.664 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:39:34.922 [2024-12-09 05:34:28.908774] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:39:34.922 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:39:34.922 { 00:39:34.922 "nqn": "nqn.2016-06.io.spdk:cnode", 00:39:34.922 "listen_address": { 00:39:34.922 "trtype": "tcp", 00:39:34.922 "traddr": "", 00:39:34.922 "trsvcid": "4421" 00:39:34.922 }, 00:39:34.922 "method": "nvmf_subsystem_remove_listener", 00:39:34.922 "req_id": 1 00:39:34.922 } 00:39:34.922 Got JSON-RPC error response 00:39:34.922 response: 00:39:34.922 { 00:39:34.922 "code": -32602, 00:39:34.922 "message": "Invalid parameters" 00:39:34.922 }' 00:39:34.922 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:39:34.922 { 00:39:34.922 "nqn": "nqn.2016-06.io.spdk:cnode", 00:39:34.922 "listen_address": { 00:39:34.922 "trtype": "tcp", 00:39:34.922 "traddr": "", 00:39:34.922 "trsvcid": "4421" 00:39:34.922 }, 00:39:34.922 "method": "nvmf_subsystem_remove_listener", 00:39:34.922 "req_id": 1 00:39:34.922 } 00:39:34.922 Got JSON-RPC error response 00:39:34.922 response: 00:39:34.922 { 00:39:34.922 "code": -32602, 00:39:34.922 "message": "Invalid parameters" 00:39:34.922 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:39:34.922 05:34:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19204 -i 0 00:39:35.180 [2024-12-09 05:34:29.181637] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19204: invalid cntlid range [0-65519] 00:39:35.180 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:39:35.180 { 00:39:35.180 "nqn": "nqn.2016-06.io.spdk:cnode19204", 00:39:35.180 "min_cntlid": 0, 00:39:35.180 "method": "nvmf_create_subsystem", 00:39:35.180 "req_id": 1 00:39:35.180 } 00:39:35.180 Got JSON-RPC error response 00:39:35.180 response: 00:39:35.180 { 00:39:35.180 "code": -32602, 00:39:35.180 "message": "Invalid cntlid range [0-65519]" 00:39:35.180 }' 00:39:35.180 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:39:35.180 { 00:39:35.180 "nqn": "nqn.2016-06.io.spdk:cnode19204", 00:39:35.180 "min_cntlid": 0, 00:39:35.180 "method": 
"nvmf_create_subsystem", 00:39:35.180 "req_id": 1 00:39:35.180 } 00:39:35.180 Got JSON-RPC error response 00:39:35.180 response: 00:39:35.180 { 00:39:35.180 "code": -32602, 00:39:35.180 "message": "Invalid cntlid range [0-65519]" 00:39:35.180 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:39:35.180 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15377 -i 65520 00:39:35.439 [2024-12-09 05:34:29.438535] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15377: invalid cntlid range [65520-65519] 00:39:35.439 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:39:35.439 { 00:39:35.439 "nqn": "nqn.2016-06.io.spdk:cnode15377", 00:39:35.439 "min_cntlid": 65520, 00:39:35.439 "method": "nvmf_create_subsystem", 00:39:35.439 "req_id": 1 00:39:35.439 } 00:39:35.439 Got JSON-RPC error response 00:39:35.439 response: 00:39:35.439 { 00:39:35.439 "code": -32602, 00:39:35.439 "message": "Invalid cntlid range [65520-65519]" 00:39:35.439 }' 00:39:35.439 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:39:35.439 { 00:39:35.439 "nqn": "nqn.2016-06.io.spdk:cnode15377", 00:39:35.439 "min_cntlid": 65520, 00:39:35.439 "method": "nvmf_create_subsystem", 00:39:35.439 "req_id": 1 00:39:35.439 } 00:39:35.439 Got JSON-RPC error response 00:39:35.439 response: 00:39:35.439 { 00:39:35.439 "code": -32602, 00:39:35.439 "message": "Invalid cntlid range [65520-65519]" 00:39:35.439 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:39:35.439 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4010 -I 0 00:39:35.697 [2024-12-09 05:34:29.703403] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4010: invalid cntlid range [1-0] 00:39:35.697 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:39:35.697 { 00:39:35.697 "nqn": "nqn.2016-06.io.spdk:cnode4010", 00:39:35.697 "max_cntlid": 0, 00:39:35.697 "method": "nvmf_create_subsystem", 00:39:35.697 "req_id": 1 00:39:35.697 } 00:39:35.697 Got JSON-RPC error response 00:39:35.697 response: 00:39:35.697 { 00:39:35.697 "code": -32602, 00:39:35.697 "message": "Invalid cntlid range [1-0]" 00:39:35.697 }' 00:39:35.697 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:39:35.697 { 00:39:35.697 "nqn": "nqn.2016-06.io.spdk:cnode4010", 00:39:35.697 "max_cntlid": 0, 00:39:35.697 "method": "nvmf_create_subsystem", 00:39:35.697 "req_id": 1 00:39:35.697 } 00:39:35.697 Got JSON-RPC error response 00:39:35.697 response: 00:39:35.697 { 00:39:35.697 "code": -32602, 00:39:35.697 "message": "Invalid cntlid range [1-0]" 00:39:35.697 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:39:35.697 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32477 -I 65520 00:39:35.955 [2024-12-09 05:34:29.972309] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32477: invalid cntlid range [1-65520] 00:39:35.955 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:39:35.955 { 
00:39:35.955 "nqn": "nqn.2016-06.io.spdk:cnode32477", 00:39:35.955 "max_cntlid": 65520, 00:39:35.955 "method": "nvmf_create_subsystem", 00:39:35.955 "req_id": 1 00:39:35.955 } 00:39:35.955 Got JSON-RPC error response 00:39:35.955 response: 00:39:35.955 { 00:39:35.955 "code": -32602, 00:39:35.955 "message": "Invalid cntlid range [1-65520]" 00:39:35.955 }' 00:39:35.955 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:39:35.955 { 00:39:35.955 "nqn": "nqn.2016-06.io.spdk:cnode32477", 00:39:35.955 "max_cntlid": 65520, 00:39:35.955 "method": "nvmf_create_subsystem", 00:39:35.955 "req_id": 1 00:39:35.955 } 00:39:35.955 Got JSON-RPC error response 00:39:35.955 response: 00:39:35.955 { 00:39:35.955 "code": -32602, 00:39:35.955 "message": "Invalid cntlid range [1-65520]" 00:39:35.955 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:39:35.955 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7350 -i 6 -I 5 00:39:36.213 [2024-12-09 05:34:30.253309] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7350: invalid cntlid range [6-5] 00:39:36.213 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:39:36.213 { 00:39:36.213 "nqn": "nqn.2016-06.io.spdk:cnode7350", 00:39:36.213 "min_cntlid": 6, 00:39:36.213 "max_cntlid": 5, 00:39:36.213 "method": "nvmf_create_subsystem", 00:39:36.213 "req_id": 1 00:39:36.213 } 00:39:36.213 Got JSON-RPC error response 00:39:36.213 response: 00:39:36.213 { 00:39:36.213 "code": -32602, 00:39:36.213 "message": "Invalid cntlid range [6-5]" 00:39:36.213 }' 00:39:36.213 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:39:36.213 { 00:39:36.213 "nqn": "nqn.2016-06.io.spdk:cnode7350", 00:39:36.213 "min_cntlid": 6, 00:39:36.213 "max_cntlid": 5, 00:39:36.213 "method": "nvmf_create_subsystem", 00:39:36.213 "req_id": 1 00:39:36.213 } 00:39:36.213 Got JSON-RPC error response 00:39:36.213 response: 00:39:36.213 { 00:39:36.213 "code": -32602, 00:39:36.213 "message": "Invalid cntlid range [6-5]" 00:39:36.213 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:39:36.213 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:39:36.213 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:39:36.213 { 00:39:36.213 "name": "foobar", 00:39:36.213 "method": "nvmf_delete_target", 00:39:36.213 "req_id": 1 00:39:36.213 } 00:39:36.213 Got JSON-RPC error response 00:39:36.213 response: 00:39:36.213 { 00:39:36.213 "code": -32602, 00:39:36.213 "message": "The specified target doesn'\''t exist, cannot delete it." 00:39:36.213 }' 00:39:36.213 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:39:36.213 { 00:39:36.213 "name": "foobar", 00:39:36.213 "method": "nvmf_delete_target", 00:39:36.213 "req_id": 1 00:39:36.213 } 00:39:36.213 Got JSON-RPC error response 00:39:36.213 response: 00:39:36.213 { 00:39:36.213 "code": -32602, 00:39:36.213 "message": "The specified target doesn't exist, cannot delete it." 
00:39:36.213 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:39:36.213 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:39:36.213 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:39:36.213 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:36.213 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:39:36.213 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:36.213 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:39:36.213 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:36.213 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:36.213 rmmod nvme_tcp 00:39:36.213 rmmod nvme_fabrics 00:39:36.213 rmmod nvme_keyring 00:39:36.471 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:36.471 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:39:36.471 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:39:36.471 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 609166 ']' 00:39:36.471 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 609166 00:39:36.471 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 609166 ']' 00:39:36.471 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 609166 00:39:36.471 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:39:36.471 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:36.471 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 609166 00:39:36.471 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:36.471 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:36.471 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 609166' 00:39:36.471 killing process with pid 609166 00:39:36.471 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 609166 00:39:36.472 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 609166 00:39:36.731 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:36.731 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:36.731 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:36.731 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:39:36.731 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:39:36.731 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:36.732 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # 
iptables-restore 00:39:36.732 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:36.732 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:36.732 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:36.732 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:36.732 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:38.638 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:38.638 00:39:38.638 real 0m9.220s 00:39:38.638 user 0m21.222s 00:39:38.638 sys 0m2.586s 00:39:38.638 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:38.638 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:39:38.638 ************************************ 00:39:38.638 END TEST nvmf_invalid 00:39:38.638 ************************************ 00:39:38.638 05:34:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:39:38.638 05:34:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:38.638 05:34:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:38.638 05:34:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:39:38.638 ************************************ 00:39:38.638 START TEST nvmf_connect_stress 00:39:38.638 ************************************ 00:39:38.638 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:39:38.896 * Looking for test storage... 
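The invalid-parameter cases that nvmf_invalid walked through above are plain JSON-RPC calls, so they can be reproduced by hand against a running nvmf_tgt. A hedged example from the SPDK repo root (the NQN is illustrative; the expected errors are the ones reported in the run above, where the -d case also showed a 41-character random string being rejected with "Invalid MN", the NVMe model-number field being 40 bytes):

    # cntlid limits: values must stay within 1..65519 and min_cntlid <= max_cntlid
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -i 0        # -32602 "Invalid cntlid range [0-65519]"
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -I 65520    # -32602 "Invalid cntlid range [1-65520]"
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -i 6 -I 5   # -32602 "Invalid cntlid range [6-5]"
    # deleting a target that does not exist fails cleanly rather than crashing
    test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar

Each call is expected to fail; the point of the test is that the target answers with a well-formed -32602 error instead of accepting the request or falling over.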
00:39:38.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:38.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.896 --rc genhtml_branch_coverage=1 00:39:38.896 --rc genhtml_function_coverage=1 00:39:38.896 --rc genhtml_legend=1 00:39:38.896 --rc geninfo_all_blocks=1 00:39:38.896 --rc geninfo_unexecuted_blocks=1 00:39:38.896 00:39:38.896 ' 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:38.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.896 --rc genhtml_branch_coverage=1 00:39:38.896 --rc genhtml_function_coverage=1 00:39:38.896 --rc genhtml_legend=1 00:39:38.896 --rc geninfo_all_blocks=1 00:39:38.896 --rc geninfo_unexecuted_blocks=1 00:39:38.896 00:39:38.896 ' 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:38.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.896 --rc genhtml_branch_coverage=1 00:39:38.896 --rc genhtml_function_coverage=1 00:39:38.896 --rc genhtml_legend=1 00:39:38.896 --rc geninfo_all_blocks=1 00:39:38.896 --rc geninfo_unexecuted_blocks=1 00:39:38.896 00:39:38.896 ' 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:38.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.896 --rc genhtml_branch_coverage=1 00:39:38.896 --rc genhtml_function_coverage=1 00:39:38.896 --rc genhtml_legend=1 00:39:38.896 --rc geninfo_all_blocks=1 00:39:38.896 --rc geninfo_unexecuted_blocks=1 00:39:38.896 00:39:38.896 ' 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:38.896 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:38.896 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:38.896 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:38.896 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:38.896 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:38.896 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:38.896 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:38.896 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:38.896 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:39:38.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:39:38.897 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:39:41.425 05:34:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:41.425 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:41.425 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:41.425 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:41.426 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:41.426 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:41.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:41.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:39:41.426 00:39:41.426 --- 10.0.0.2 ping statistics --- 00:39:41.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:41.426 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:41.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:41.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:39:41.426 00:39:41.426 --- 10.0.0.1 ping statistics --- 00:39:41.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:41.426 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=611812 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 611812 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 611812 ']' 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:41.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
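The interface plumbing traced above reduces to a short sequence: one of the two cvl_0_* ports is moved into a private network namespace and the two ends are addressed so that the initiator side (10.0.0.1 on cvl_0_1, default namespace) can reach the target side (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk), with an iptables rule opening TCP/4420 and a ping in each direction as a sanity check. Condensed from the trace (sudo, the initial address flushes, and the iptables comment tag are trimmed; treat this as a sketch rather than the exact script):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                      # default namespace -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # namespace -> default namespace

nvmf_tgt is then launched via ip netns exec cvl_0_0_ns_spdk (next entries), so the NVMe/TCP listener on 10.0.0.2:4420 lives inside the namespace while the connect_stress tool later connects to it from the default namespace.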
00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:41.426 [2024-12-09 05:34:35.380681] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:39:41.426 [2024-12-09 05:34:35.380763] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:41.426 [2024-12-09 05:34:35.453376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:41.426 [2024-12-09 05:34:35.511994] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:41.426 [2024-12-09 05:34:35.512059] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:41.426 [2024-12-09 05:34:35.512088] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:41.426 [2024-12-09 05:34:35.512100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:41.426 [2024-12-09 05:34:35.512110] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:41.426 [2024-12-09 05:34:35.513639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:41.426 [2024-12-09 05:34:35.513727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:41.426 [2024-12-09 05:34:35.513730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:41.426 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:41.684 [2024-12-09 05:34:35.661791] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:41.684 05:34:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:41.684 [2024-12-09 05:34:35.679154] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:41.684 NULL1 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=611876 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:39:41.684 05:34:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:39:41.684 05:34:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.684 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:41.940 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.940 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:41.940 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:41.940 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.940 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:42.197 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.197 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:42.197 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:42.197 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.197 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:42.815 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.815 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:42.815 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:42.815 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.815 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:43.109 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.109 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:43.109 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:43.109 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.109 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:43.382 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.382 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:43.382 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:43.382 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.382 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:43.644 05:34:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.644 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:43.644 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:43.644 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.644 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:43.901 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.901 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:43.901 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:43.901 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.901 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:44.158 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:44.158 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:44.158 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:44.158 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:44.158 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:44.415 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:44.415 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:44.415 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:44.415 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:44.415 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:44.979 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:44.979 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:44.979 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:44.979 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:44.979 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:45.236 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.236 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:45.236 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:45.236 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.236 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:45.493 05:34:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.493 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:45.493 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:45.493 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.493 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:45.750 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.750 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:45.750 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:45.750 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.750 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:46.312 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.312 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:46.312 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:46.312 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.312 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:46.569 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.569 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:46.569 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:46.569 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.569 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:46.825 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:46.825 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:46.825 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:46.825 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:46.825 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:47.082 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.082 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:47.082 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:47.082 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.082 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:47.339 05:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.339 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:47.339 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:47.339 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.339 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:47.901 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:47.901 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:47.901 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:47.901 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:47.901 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:48.158 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.158 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:48.158 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:48.158 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.158 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:48.414 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.414 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:48.414 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:48.414 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.414 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:48.670 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.670 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:48.670 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:48.670 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.670 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:48.926 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.927 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:48.927 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:48.927 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.927 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:49.490 05:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.490 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:49.490 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:49.490 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.490 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:49.747 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.747 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:49.747 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:49.747 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.747 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:50.004 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.004 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:50.004 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:50.004 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.004 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:50.261 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.262 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:50.262 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:50.262 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.262 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:50.519 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.519 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:50.519 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:50.519 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.519 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:51.083 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.083 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:51.083 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:51.083 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.083 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:51.342 05:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.342 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:51.342 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:51.342 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.342 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:51.599 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.599 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:51.599 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:39:51.599 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.599 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:51.856 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:51.856 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.856 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 611876 00:39:51.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (611876) - No such process 00:39:51.856 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 611876 00:39:51.856 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:39:51.856 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:39:51.856 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:39:51.856 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:51.856 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:39:51.856 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:51.856 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:39:51.856 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:51.856 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:51.856 rmmod nvme_tcp 00:39:51.856 rmmod nvme_fabrics 00:39:51.856 rmmod nvme_keyring 00:39:52.114 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:52.114 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:39:52.114 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:39:52.114 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 611812 ']' 00:39:52.114 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 611812 00:39:52.114 05:34:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 611812 ']' 00:39:52.114 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 611812 00:39:52.114 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:39:52.114 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:52.114 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 611812 00:39:52.114 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:52.114 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:52.114 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 611812' 00:39:52.114 killing process with pid 611812 00:39:52.114 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 611812 00:39:52.114 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 611812 00:39:52.373 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:52.373 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:52.373 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:52.373 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:39:52.373 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:39:52.373 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:52.373 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:39:52.373 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:52.373 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:52.373 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:52.373 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:52.373 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:54.278 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:54.278 00:39:54.278 real 0m15.611s 00:39:54.278 user 0m38.595s 00:39:54.278 sys 0m6.049s 00:39:54.278 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:54.278 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:39:54.278 ************************************ 00:39:54.278 END TEST nvmf_connect_stress 00:39:54.278 ************************************ 00:39:54.278 05:34:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 
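The connect_stress trace that ends above reduces to a short setup-then-poll pattern. The sketch below is reconstructed from the xtrace output only: $rootdir stands in for the long workspace path, rpc_cmd is the harness RPC helper seen in the trace, and the shape of the polling loop (feeding the generated rpc.txt back into rpc_cmd) is an assumption, since the trace only shows bare rpc_cmd calls at connect_stress.sh line 35.

    # Build an NVMe/TCP target with a null bdev, then stress it with connect/disconnect cycles.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192          # TCP transport, flags as recorded in the trace
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # -a allow any host, -m max namespaces
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512                  # null backing bdev, block size 512
    "$rootdir/test/nvme/connect_stress/connect_stress" -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!                                              # 611876 in the run above
    rpcs=$rootdir/test/nvmf/target/rpc.txt                   # batch of 20 RPCs assembled by the seq/cat loop traced above
    while kill -0 "$PERF_PID" 2>/dev/null; do                # connect_stress.sh line 34: poll until the stressor exits
        rpc_cmd < "$rpcs"                                    # assumption: the rpc.txt batch is replayed while the stressor runs
    done
    wait "$PERF_PID"
    rm -f "$rpcs"

The same create-transport / create-subsystem / add-listener / null-bdev sequence recurs in the fused_ordering setup that starts below, with the addition of nvmf_subsystem_add_ns and a single-core mask (-m 0x2) for the target.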
00:39:54.278 05:34:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:54.278 05:34:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:54.278 05:34:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:39:54.537 ************************************ 00:39:54.537 START TEST nvmf_fused_ordering 00:39:54.537 ************************************ 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:39:54.537 * Looking for test storage... 00:39:54.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:54.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:54.537 --rc genhtml_branch_coverage=1 00:39:54.537 --rc genhtml_function_coverage=1 00:39:54.537 --rc genhtml_legend=1 00:39:54.537 --rc geninfo_all_blocks=1 00:39:54.537 --rc geninfo_unexecuted_blocks=1 00:39:54.537 00:39:54.537 ' 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:54.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:54.537 --rc genhtml_branch_coverage=1 00:39:54.537 --rc genhtml_function_coverage=1 00:39:54.537 --rc genhtml_legend=1 00:39:54.537 --rc geninfo_all_blocks=1 00:39:54.537 --rc geninfo_unexecuted_blocks=1 00:39:54.537 00:39:54.537 ' 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:54.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:54.537 --rc genhtml_branch_coverage=1 00:39:54.537 --rc genhtml_function_coverage=1 00:39:54.537 --rc genhtml_legend=1 00:39:54.537 --rc geninfo_all_blocks=1 00:39:54.537 --rc geninfo_unexecuted_blocks=1 00:39:54.537 00:39:54.537 ' 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:54.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:54.537 --rc genhtml_branch_coverage=1 00:39:54.537 --rc genhtml_function_coverage=1 00:39:54.537 --rc genhtml_legend=1 00:39:54.537 --rc geninfo_all_blocks=1 00:39:54.537 --rc geninfo_unexecuted_blocks=1 00:39:54.537 00:39:54.537 ' 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:54.537 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:54.538 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:54.538 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:39:54.538 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:54.538 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:39:54.538 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:54.538 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:54.538 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:54.538 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:54.538 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:54.538 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:39:54.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:54.538 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:54.538 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:54.538 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:54.538 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:39:54.538 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:54.538 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:54.538 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:54.538 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:54.538 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:54.538 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:54.538 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:54.538 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:54.538 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:54.538 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:54.538 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:39:54.538 05:34:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:39:57.070 05:34:50 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:57.070 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:57.070 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:57.070 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:57.071 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:57.071 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:57.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:57.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:39:57.071 00:39:57.071 --- 10.0.0.2 ping statistics --- 00:39:57.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:57.071 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:57.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:57.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:39:57.071 00:39:57.071 --- 10.0.0.1 ping statistics --- 00:39:57.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:57.071 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=615125 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 615125 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 615125 ']' 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:57.071 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:39:57.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:57.072 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:57.072 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:39:57.072 [2024-12-09 05:34:51.041151] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:39:57.072 [2024-12-09 05:34:51.041242] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:57.072 [2024-12-09 05:34:51.116791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:57.072 [2024-12-09 05:34:51.173308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:57.072 [2024-12-09 05:34:51.173376] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:57.072 [2024-12-09 05:34:51.173405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:57.072 [2024-12-09 05:34:51.173417] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:57.072 [2024-12-09 05:34:51.173428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:57.072 [2024-12-09 05:34:51.174099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:57.072 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:57.072 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:39:57.072 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:57.072 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:57.072 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:39:57.329 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:57.329 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:57.329 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.329 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:39:57.329 [2024-12-09 05:34:51.321434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:57.329 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.329 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:57.329 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.329 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:39:57.329 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:39:57.329 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:57.329 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.329 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:39:57.329 [2024-12-09 05:34:51.337680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:57.329 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.329 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:39:57.329 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.329 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:39:57.329 NULL1 00:39:57.329 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.329 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:39:57.329 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.329 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:39:57.329 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.329 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:39:57.329 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.329 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:39:57.329 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.329 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:57.329 [2024-12-09 05:34:51.382235] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
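Note: the setup traced above reduces to a short, reproducible sequence: move one port of the NIC pair into a private network namespace, address both sides on 10.0.0.0/24, open TCP port 4420, start nvmf_tgt inside the namespace, configure it over the RPC socket, and point the fused_ordering app at the resulting listener. The sketch below is a minimal manual equivalent under those assumptions, not the harness itself; it keeps the cvl_0_0/cvl_0_1 interface names seen above, relies on the default /var/tmp/spdk.sock socket that scripts/rpc.py targets, uses $SPDK as shorthand for the spdk checkout, and simply backgrounds nvmf_tgt where the harness pings both directions and waits for the RPC socket before issuing RPCs.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # shorthand, adjust to your checkout
  ip netns add cvl_0_0_ns_spdk                             # target side lives in its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side (host)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side (namespace)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -m 0x2 &     # harness waits on /var/tmp/spdk.sock here
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $SPDK/scripts/rpc.py bdev_null_create NULL1 1000 512                # 1000 MiB null bdev, 512-byte blocks
  $SPDK/scripts/rpc.py bdev_wait_for_examine
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  $SPDK/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'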
00:39:57.329 [2024-12-09 05:34:51.382295] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid615146 ] 00:39:57.586 Attached to nqn.2016-06.io.spdk:cnode1 00:39:57.586 Namespace ID: 1 size: 1GB 00:39:57.586 fused_ordering(0) 00:39:57.586 fused_ordering(1) 00:39:57.586 fused_ordering(2) 00:39:57.586 fused_ordering(3) 00:39:57.586 fused_ordering(4) 00:39:57.586 fused_ordering(5) 00:39:57.586 fused_ordering(6) 00:39:57.586 fused_ordering(7) 00:39:57.586 fused_ordering(8) 00:39:57.586 fused_ordering(9) 00:39:57.586 fused_ordering(10) 00:39:57.586 fused_ordering(11) 00:39:57.586 fused_ordering(12) 00:39:57.586 fused_ordering(13) 00:39:57.586 fused_ordering(14) 00:39:57.586 fused_ordering(15) 00:39:57.586 fused_ordering(16) 00:39:57.586 fused_ordering(17) 00:39:57.586 fused_ordering(18) 00:39:57.586 fused_ordering(19) 00:39:57.586 fused_ordering(20) 00:39:57.586 fused_ordering(21) 00:39:57.586 fused_ordering(22) 00:39:57.586 fused_ordering(23) 00:39:57.586 fused_ordering(24) 00:39:57.586 fused_ordering(25) 00:39:57.586 fused_ordering(26) 00:39:57.586 fused_ordering(27) 00:39:57.586 fused_ordering(28) 00:39:57.586 fused_ordering(29) 00:39:57.586 fused_ordering(30) 00:39:57.586 fused_ordering(31) 00:39:57.586 fused_ordering(32) 00:39:57.586 fused_ordering(33) 00:39:57.586 fused_ordering(34) 00:39:57.586 fused_ordering(35) 00:39:57.586 fused_ordering(36) 00:39:57.586 fused_ordering(37) 00:39:57.586 fused_ordering(38) 00:39:57.586 fused_ordering(39) 00:39:57.586 fused_ordering(40) 00:39:57.586 fused_ordering(41) 00:39:57.586 fused_ordering(42) 00:39:57.586 fused_ordering(43) 00:39:57.586 fused_ordering(44) 00:39:57.586 fused_ordering(45) 00:39:57.586 fused_ordering(46) 00:39:57.586 fused_ordering(47) 00:39:57.586 fused_ordering(48) 00:39:57.586 fused_ordering(49) 00:39:57.586 fused_ordering(50) 00:39:57.586 fused_ordering(51) 00:39:57.586 fused_ordering(52) 00:39:57.586 fused_ordering(53) 00:39:57.586 fused_ordering(54) 00:39:57.586 fused_ordering(55) 00:39:57.586 fused_ordering(56) 00:39:57.586 fused_ordering(57) 00:39:57.586 fused_ordering(58) 00:39:57.586 fused_ordering(59) 00:39:57.586 fused_ordering(60) 00:39:57.586 fused_ordering(61) 00:39:57.587 fused_ordering(62) 00:39:57.587 fused_ordering(63) 00:39:57.587 fused_ordering(64) 00:39:57.587 fused_ordering(65) 00:39:57.587 fused_ordering(66) 00:39:57.587 fused_ordering(67) 00:39:57.587 fused_ordering(68) 00:39:57.587 fused_ordering(69) 00:39:57.587 fused_ordering(70) 00:39:57.587 fused_ordering(71) 00:39:57.587 fused_ordering(72) 00:39:57.587 fused_ordering(73) 00:39:57.587 fused_ordering(74) 00:39:57.587 fused_ordering(75) 00:39:57.587 fused_ordering(76) 00:39:57.587 fused_ordering(77) 00:39:57.587 fused_ordering(78) 00:39:57.587 fused_ordering(79) 00:39:57.587 fused_ordering(80) 00:39:57.587 fused_ordering(81) 00:39:57.587 fused_ordering(82) 00:39:57.587 fused_ordering(83) 00:39:57.587 fused_ordering(84) 00:39:57.587 fused_ordering(85) 00:39:57.587 fused_ordering(86) 00:39:57.587 fused_ordering(87) 00:39:57.587 fused_ordering(88) 00:39:57.587 fused_ordering(89) 00:39:57.587 fused_ordering(90) 00:39:57.587 fused_ordering(91) 00:39:57.587 fused_ordering(92) 00:39:57.587 fused_ordering(93) 00:39:57.587 fused_ordering(94) 00:39:57.587 fused_ordering(95) 00:39:57.587 fused_ordering(96) 00:39:57.587 fused_ordering(97) 00:39:57.587 fused_ordering(98) 
00:39:57.587 fused_ordering(99) 00:39:57.587 fused_ordering(100) 00:39:57.587 fused_ordering(101) 00:39:57.587 fused_ordering(102) 00:39:57.587 fused_ordering(103) 00:39:57.587 fused_ordering(104) 00:39:57.587 fused_ordering(105) 00:39:57.587 fused_ordering(106) 00:39:57.587 fused_ordering(107) 00:39:57.587 fused_ordering(108) 00:39:57.587 fused_ordering(109) 00:39:57.587 fused_ordering(110) 00:39:57.587 fused_ordering(111) 00:39:57.587 fused_ordering(112) 00:39:57.587 fused_ordering(113) 00:39:57.587 fused_ordering(114) 00:39:57.587 fused_ordering(115) 00:39:57.587 fused_ordering(116) 00:39:57.587 fused_ordering(117) 00:39:57.587 fused_ordering(118) 00:39:57.587 fused_ordering(119) 00:39:57.587 fused_ordering(120) 00:39:57.587 fused_ordering(121) 00:39:57.587 fused_ordering(122) 00:39:57.587 fused_ordering(123) 00:39:57.587 fused_ordering(124) 00:39:57.587 fused_ordering(125) 00:39:57.587 fused_ordering(126) 00:39:57.587 fused_ordering(127) 00:39:57.587 fused_ordering(128) 00:39:57.587 fused_ordering(129) 00:39:57.587 fused_ordering(130) 00:39:57.587 fused_ordering(131) 00:39:57.587 fused_ordering(132) 00:39:57.587 fused_ordering(133) 00:39:57.587 fused_ordering(134) 00:39:57.587 fused_ordering(135) 00:39:57.587 fused_ordering(136) 00:39:57.587 fused_ordering(137) 00:39:57.587 fused_ordering(138) 00:39:57.587 fused_ordering(139) 00:39:57.587 fused_ordering(140) 00:39:57.587 fused_ordering(141) 00:39:57.587 fused_ordering(142) 00:39:57.587 fused_ordering(143) 00:39:57.587 fused_ordering(144) 00:39:57.587 fused_ordering(145) 00:39:57.587 fused_ordering(146) 00:39:57.587 fused_ordering(147) 00:39:57.587 fused_ordering(148) 00:39:57.587 fused_ordering(149) 00:39:57.587 fused_ordering(150) 00:39:57.587 fused_ordering(151) 00:39:57.587 fused_ordering(152) 00:39:57.587 fused_ordering(153) 00:39:57.587 fused_ordering(154) 00:39:57.587 fused_ordering(155) 00:39:57.587 fused_ordering(156) 00:39:57.587 fused_ordering(157) 00:39:57.587 fused_ordering(158) 00:39:57.587 fused_ordering(159) 00:39:57.587 fused_ordering(160) 00:39:57.587 fused_ordering(161) 00:39:57.587 fused_ordering(162) 00:39:57.587 fused_ordering(163) 00:39:57.587 fused_ordering(164) 00:39:57.587 fused_ordering(165) 00:39:57.587 fused_ordering(166) 00:39:57.587 fused_ordering(167) 00:39:57.587 fused_ordering(168) 00:39:57.587 fused_ordering(169) 00:39:57.587 fused_ordering(170) 00:39:57.587 fused_ordering(171) 00:39:57.587 fused_ordering(172) 00:39:57.587 fused_ordering(173) 00:39:57.587 fused_ordering(174) 00:39:57.587 fused_ordering(175) 00:39:57.587 fused_ordering(176) 00:39:57.587 fused_ordering(177) 00:39:57.587 fused_ordering(178) 00:39:57.587 fused_ordering(179) 00:39:57.587 fused_ordering(180) 00:39:57.587 fused_ordering(181) 00:39:57.587 fused_ordering(182) 00:39:57.587 fused_ordering(183) 00:39:57.587 fused_ordering(184) 00:39:57.587 fused_ordering(185) 00:39:57.587 fused_ordering(186) 00:39:57.587 fused_ordering(187) 00:39:57.587 fused_ordering(188) 00:39:57.587 fused_ordering(189) 00:39:57.587 fused_ordering(190) 00:39:57.587 fused_ordering(191) 00:39:57.587 fused_ordering(192) 00:39:57.587 fused_ordering(193) 00:39:57.587 fused_ordering(194) 00:39:57.587 fused_ordering(195) 00:39:57.587 fused_ordering(196) 00:39:57.587 fused_ordering(197) 00:39:57.587 fused_ordering(198) 00:39:57.587 fused_ordering(199) 00:39:57.587 fused_ordering(200) 00:39:57.587 fused_ordering(201) 00:39:57.587 fused_ordering(202) 00:39:57.587 fused_ordering(203) 00:39:57.587 fused_ordering(204) 00:39:57.587 fused_ordering(205) 00:39:58.151 
fused_ordering(206) 00:39:58.151 fused_ordering(207) 00:39:58.151 fused_ordering(208) 00:39:58.151 fused_ordering(209) 00:39:58.151 fused_ordering(210) 00:39:58.151 fused_ordering(211) 00:39:58.151 fused_ordering(212) 00:39:58.151 fused_ordering(213) 00:39:58.151 fused_ordering(214) 00:39:58.151 fused_ordering(215) 00:39:58.151 fused_ordering(216) 00:39:58.151 fused_ordering(217) 00:39:58.151 fused_ordering(218) 00:39:58.151 fused_ordering(219) 00:39:58.151 fused_ordering(220) 00:39:58.151 fused_ordering(221) 00:39:58.151 fused_ordering(222) 00:39:58.151 fused_ordering(223) 00:39:58.151 fused_ordering(224) 00:39:58.151 fused_ordering(225) 00:39:58.151 fused_ordering(226) 00:39:58.151 fused_ordering(227) 00:39:58.151 fused_ordering(228) 00:39:58.151 fused_ordering(229) 00:39:58.151 fused_ordering(230) 00:39:58.151 fused_ordering(231) 00:39:58.151 fused_ordering(232) 00:39:58.151 fused_ordering(233) 00:39:58.151 fused_ordering(234) 00:39:58.151 fused_ordering(235) 00:39:58.151 fused_ordering(236) 00:39:58.151 fused_ordering(237) 00:39:58.151 fused_ordering(238) 00:39:58.151 fused_ordering(239) 00:39:58.151 fused_ordering(240) 00:39:58.151 fused_ordering(241) 00:39:58.151 fused_ordering(242) 00:39:58.151 fused_ordering(243) 00:39:58.151 fused_ordering(244) 00:39:58.151 fused_ordering(245) 00:39:58.151 fused_ordering(246) 00:39:58.151 fused_ordering(247) 00:39:58.151 fused_ordering(248) 00:39:58.151 fused_ordering(249) 00:39:58.151 fused_ordering(250) 00:39:58.151 fused_ordering(251) 00:39:58.151 fused_ordering(252) 00:39:58.151 fused_ordering(253) 00:39:58.151 fused_ordering(254) 00:39:58.151 fused_ordering(255) 00:39:58.151 fused_ordering(256) 00:39:58.151 fused_ordering(257) 00:39:58.151 fused_ordering(258) 00:39:58.151 fused_ordering(259) 00:39:58.151 fused_ordering(260) 00:39:58.151 fused_ordering(261) 00:39:58.151 fused_ordering(262) 00:39:58.151 fused_ordering(263) 00:39:58.151 fused_ordering(264) 00:39:58.151 fused_ordering(265) 00:39:58.151 fused_ordering(266) 00:39:58.151 fused_ordering(267) 00:39:58.151 fused_ordering(268) 00:39:58.151 fused_ordering(269) 00:39:58.151 fused_ordering(270) 00:39:58.151 fused_ordering(271) 00:39:58.151 fused_ordering(272) 00:39:58.151 fused_ordering(273) 00:39:58.151 fused_ordering(274) 00:39:58.151 fused_ordering(275) 00:39:58.151 fused_ordering(276) 00:39:58.151 fused_ordering(277) 00:39:58.151 fused_ordering(278) 00:39:58.151 fused_ordering(279) 00:39:58.151 fused_ordering(280) 00:39:58.151 fused_ordering(281) 00:39:58.151 fused_ordering(282) 00:39:58.151 fused_ordering(283) 00:39:58.151 fused_ordering(284) 00:39:58.151 fused_ordering(285) 00:39:58.151 fused_ordering(286) 00:39:58.151 fused_ordering(287) 00:39:58.151 fused_ordering(288) 00:39:58.151 fused_ordering(289) 00:39:58.151 fused_ordering(290) 00:39:58.151 fused_ordering(291) 00:39:58.151 fused_ordering(292) 00:39:58.151 fused_ordering(293) 00:39:58.151 fused_ordering(294) 00:39:58.151 fused_ordering(295) 00:39:58.151 fused_ordering(296) 00:39:58.152 fused_ordering(297) 00:39:58.152 fused_ordering(298) 00:39:58.152 fused_ordering(299) 00:39:58.152 fused_ordering(300) 00:39:58.152 fused_ordering(301) 00:39:58.152 fused_ordering(302) 00:39:58.152 fused_ordering(303) 00:39:58.152 fused_ordering(304) 00:39:58.152 fused_ordering(305) 00:39:58.152 fused_ordering(306) 00:39:58.152 fused_ordering(307) 00:39:58.152 fused_ordering(308) 00:39:58.152 fused_ordering(309) 00:39:58.152 fused_ordering(310) 00:39:58.152 fused_ordering(311) 00:39:58.152 fused_ordering(312) 00:39:58.152 fused_ordering(313) 
00:39:58.152 fused_ordering(314) 00:39:58.152 fused_ordering(315) 00:39:58.152 fused_ordering(316) 00:39:58.152 fused_ordering(317) 00:39:58.152 fused_ordering(318) 00:39:58.152 fused_ordering(319) 00:39:58.152 fused_ordering(320) 00:39:58.152 fused_ordering(321) 00:39:58.152 fused_ordering(322) 00:39:58.152 fused_ordering(323) 00:39:58.152 fused_ordering(324) 00:39:58.152 fused_ordering(325) 00:39:58.152 fused_ordering(326) 00:39:58.152 fused_ordering(327) 00:39:58.152 fused_ordering(328) 00:39:58.152 fused_ordering(329) 00:39:58.152 fused_ordering(330) 00:39:58.152 fused_ordering(331) 00:39:58.152 fused_ordering(332) 00:39:58.152 fused_ordering(333) 00:39:58.152 fused_ordering(334) 00:39:58.152 fused_ordering(335) 00:39:58.152 fused_ordering(336) 00:39:58.152 fused_ordering(337) 00:39:58.152 fused_ordering(338) 00:39:58.152 fused_ordering(339) 00:39:58.152 fused_ordering(340) 00:39:58.152 fused_ordering(341) 00:39:58.152 fused_ordering(342) 00:39:58.152 fused_ordering(343) 00:39:58.152 fused_ordering(344) 00:39:58.152 fused_ordering(345) 00:39:58.152 fused_ordering(346) 00:39:58.152 fused_ordering(347) 00:39:58.152 fused_ordering(348) 00:39:58.152 fused_ordering(349) 00:39:58.152 fused_ordering(350) 00:39:58.152 fused_ordering(351) 00:39:58.152 fused_ordering(352) 00:39:58.152 fused_ordering(353) 00:39:58.152 fused_ordering(354) 00:39:58.152 fused_ordering(355) 00:39:58.152 fused_ordering(356) 00:39:58.152 fused_ordering(357) 00:39:58.152 fused_ordering(358) 00:39:58.152 fused_ordering(359) 00:39:58.152 fused_ordering(360) 00:39:58.152 fused_ordering(361) 00:39:58.152 fused_ordering(362) 00:39:58.152 fused_ordering(363) 00:39:58.152 fused_ordering(364) 00:39:58.152 fused_ordering(365) 00:39:58.152 fused_ordering(366) 00:39:58.152 fused_ordering(367) 00:39:58.152 fused_ordering(368) 00:39:58.152 fused_ordering(369) 00:39:58.152 fused_ordering(370) 00:39:58.152 fused_ordering(371) 00:39:58.152 fused_ordering(372) 00:39:58.152 fused_ordering(373) 00:39:58.152 fused_ordering(374) 00:39:58.152 fused_ordering(375) 00:39:58.152 fused_ordering(376) 00:39:58.152 fused_ordering(377) 00:39:58.152 fused_ordering(378) 00:39:58.152 fused_ordering(379) 00:39:58.152 fused_ordering(380) 00:39:58.152 fused_ordering(381) 00:39:58.152 fused_ordering(382) 00:39:58.152 fused_ordering(383) 00:39:58.152 fused_ordering(384) 00:39:58.152 fused_ordering(385) 00:39:58.152 fused_ordering(386) 00:39:58.152 fused_ordering(387) 00:39:58.152 fused_ordering(388) 00:39:58.152 fused_ordering(389) 00:39:58.152 fused_ordering(390) 00:39:58.152 fused_ordering(391) 00:39:58.152 fused_ordering(392) 00:39:58.152 fused_ordering(393) 00:39:58.152 fused_ordering(394) 00:39:58.152 fused_ordering(395) 00:39:58.152 fused_ordering(396) 00:39:58.152 fused_ordering(397) 00:39:58.152 fused_ordering(398) 00:39:58.152 fused_ordering(399) 00:39:58.152 fused_ordering(400) 00:39:58.152 fused_ordering(401) 00:39:58.152 fused_ordering(402) 00:39:58.152 fused_ordering(403) 00:39:58.152 fused_ordering(404) 00:39:58.152 fused_ordering(405) 00:39:58.152 fused_ordering(406) 00:39:58.152 fused_ordering(407) 00:39:58.152 fused_ordering(408) 00:39:58.152 fused_ordering(409) 00:39:58.152 fused_ordering(410) 00:39:58.409 fused_ordering(411) 00:39:58.409 fused_ordering(412) 00:39:58.409 fused_ordering(413) 00:39:58.409 fused_ordering(414) 00:39:58.409 fused_ordering(415) 00:39:58.409 fused_ordering(416) 00:39:58.409 fused_ordering(417) 00:39:58.409 fused_ordering(418) 00:39:58.409 fused_ordering(419) 00:39:58.409 fused_ordering(420) 00:39:58.409 
fused_ordering(421) 00:39:58.409 fused_ordering(422) 00:39:58.409 fused_ordering(423) 00:39:58.409 fused_ordering(424) 00:39:58.409 fused_ordering(425) 00:39:58.409 fused_ordering(426) 00:39:58.409 fused_ordering(427) 00:39:58.409 fused_ordering(428) 00:39:58.409 fused_ordering(429) 00:39:58.409 fused_ordering(430) 00:39:58.409 fused_ordering(431) 00:39:58.409 fused_ordering(432) 00:39:58.409 fused_ordering(433) 00:39:58.409 fused_ordering(434) 00:39:58.409 fused_ordering(435) 00:39:58.409 fused_ordering(436) 00:39:58.409 fused_ordering(437) 00:39:58.409 fused_ordering(438) 00:39:58.409 fused_ordering(439) 00:39:58.409 fused_ordering(440) 00:39:58.409 fused_ordering(441) 00:39:58.409 fused_ordering(442) 00:39:58.409 fused_ordering(443) 00:39:58.409 fused_ordering(444) 00:39:58.409 fused_ordering(445) 00:39:58.409 fused_ordering(446) 00:39:58.409 fused_ordering(447) 00:39:58.409 fused_ordering(448) 00:39:58.409 fused_ordering(449) 00:39:58.409 fused_ordering(450) 00:39:58.409 fused_ordering(451) 00:39:58.409 fused_ordering(452) 00:39:58.409 fused_ordering(453) 00:39:58.409 fused_ordering(454) 00:39:58.409 fused_ordering(455) 00:39:58.409 fused_ordering(456) 00:39:58.409 fused_ordering(457) 00:39:58.409 fused_ordering(458) 00:39:58.409 fused_ordering(459) 00:39:58.409 fused_ordering(460) 00:39:58.409 fused_ordering(461) 00:39:58.409 fused_ordering(462) 00:39:58.409 fused_ordering(463) 00:39:58.409 fused_ordering(464) 00:39:58.409 fused_ordering(465) 00:39:58.409 fused_ordering(466) 00:39:58.409 fused_ordering(467) 00:39:58.409 fused_ordering(468) 00:39:58.409 fused_ordering(469) 00:39:58.409 fused_ordering(470) 00:39:58.409 fused_ordering(471) 00:39:58.409 fused_ordering(472) 00:39:58.409 fused_ordering(473) 00:39:58.409 fused_ordering(474) 00:39:58.409 fused_ordering(475) 00:39:58.409 fused_ordering(476) 00:39:58.409 fused_ordering(477) 00:39:58.409 fused_ordering(478) 00:39:58.409 fused_ordering(479) 00:39:58.409 fused_ordering(480) 00:39:58.409 fused_ordering(481) 00:39:58.409 fused_ordering(482) 00:39:58.409 fused_ordering(483) 00:39:58.409 fused_ordering(484) 00:39:58.409 fused_ordering(485) 00:39:58.409 fused_ordering(486) 00:39:58.409 fused_ordering(487) 00:39:58.409 fused_ordering(488) 00:39:58.409 fused_ordering(489) 00:39:58.409 fused_ordering(490) 00:39:58.409 fused_ordering(491) 00:39:58.409 fused_ordering(492) 00:39:58.409 fused_ordering(493) 00:39:58.409 fused_ordering(494) 00:39:58.409 fused_ordering(495) 00:39:58.409 fused_ordering(496) 00:39:58.409 fused_ordering(497) 00:39:58.409 fused_ordering(498) 00:39:58.409 fused_ordering(499) 00:39:58.409 fused_ordering(500) 00:39:58.409 fused_ordering(501) 00:39:58.409 fused_ordering(502) 00:39:58.409 fused_ordering(503) 00:39:58.409 fused_ordering(504) 00:39:58.409 fused_ordering(505) 00:39:58.409 fused_ordering(506) 00:39:58.409 fused_ordering(507) 00:39:58.409 fused_ordering(508) 00:39:58.409 fused_ordering(509) 00:39:58.409 fused_ordering(510) 00:39:58.409 fused_ordering(511) 00:39:58.409 fused_ordering(512) 00:39:58.409 fused_ordering(513) 00:39:58.409 fused_ordering(514) 00:39:58.409 fused_ordering(515) 00:39:58.409 fused_ordering(516) 00:39:58.410 fused_ordering(517) 00:39:58.410 fused_ordering(518) 00:39:58.410 fused_ordering(519) 00:39:58.410 fused_ordering(520) 00:39:58.410 fused_ordering(521) 00:39:58.410 fused_ordering(522) 00:39:58.410 fused_ordering(523) 00:39:58.410 fused_ordering(524) 00:39:58.410 fused_ordering(525) 00:39:58.410 fused_ordering(526) 00:39:58.410 fused_ordering(527) 00:39:58.410 fused_ordering(528) 
00:39:58.410 fused_ordering(529) 00:39:58.410 fused_ordering(530) 00:39:58.410 fused_ordering(531) 00:39:58.410 fused_ordering(532) 00:39:58.410 fused_ordering(533) 00:39:58.410 fused_ordering(534) 00:39:58.410 fused_ordering(535) 00:39:58.410 fused_ordering(536) 00:39:58.410 fused_ordering(537) 00:39:58.410 fused_ordering(538) 00:39:58.410 fused_ordering(539) 00:39:58.410 fused_ordering(540) 00:39:58.410 fused_ordering(541) 00:39:58.410 fused_ordering(542) 00:39:58.410 fused_ordering(543) 00:39:58.410 fused_ordering(544) 00:39:58.410 fused_ordering(545) 00:39:58.410 fused_ordering(546) 00:39:58.410 fused_ordering(547) 00:39:58.410 fused_ordering(548) 00:39:58.410 fused_ordering(549) 00:39:58.410 fused_ordering(550) 00:39:58.410 fused_ordering(551) 00:39:58.410 fused_ordering(552) 00:39:58.410 fused_ordering(553) 00:39:58.410 fused_ordering(554) 00:39:58.410 fused_ordering(555) 00:39:58.410 fused_ordering(556) 00:39:58.410 fused_ordering(557) 00:39:58.410 fused_ordering(558) 00:39:58.410 fused_ordering(559) 00:39:58.410 fused_ordering(560) 00:39:58.410 fused_ordering(561) 00:39:58.410 fused_ordering(562) 00:39:58.410 fused_ordering(563) 00:39:58.410 fused_ordering(564) 00:39:58.410 fused_ordering(565) 00:39:58.410 fused_ordering(566) 00:39:58.410 fused_ordering(567) 00:39:58.410 fused_ordering(568) 00:39:58.410 fused_ordering(569) 00:39:58.410 fused_ordering(570) 00:39:58.410 fused_ordering(571) 00:39:58.410 fused_ordering(572) 00:39:58.410 fused_ordering(573) 00:39:58.410 fused_ordering(574) 00:39:58.410 fused_ordering(575) 00:39:58.410 fused_ordering(576) 00:39:58.410 fused_ordering(577) 00:39:58.410 fused_ordering(578) 00:39:58.410 fused_ordering(579) 00:39:58.410 fused_ordering(580) 00:39:58.410 fused_ordering(581) 00:39:58.410 fused_ordering(582) 00:39:58.410 fused_ordering(583) 00:39:58.410 fused_ordering(584) 00:39:58.410 fused_ordering(585) 00:39:58.410 fused_ordering(586) 00:39:58.410 fused_ordering(587) 00:39:58.410 fused_ordering(588) 00:39:58.410 fused_ordering(589) 00:39:58.410 fused_ordering(590) 00:39:58.410 fused_ordering(591) 00:39:58.410 fused_ordering(592) 00:39:58.410 fused_ordering(593) 00:39:58.410 fused_ordering(594) 00:39:58.410 fused_ordering(595) 00:39:58.410 fused_ordering(596) 00:39:58.410 fused_ordering(597) 00:39:58.410 fused_ordering(598) 00:39:58.410 fused_ordering(599) 00:39:58.410 fused_ordering(600) 00:39:58.410 fused_ordering(601) 00:39:58.410 fused_ordering(602) 00:39:58.410 fused_ordering(603) 00:39:58.410 fused_ordering(604) 00:39:58.410 fused_ordering(605) 00:39:58.410 fused_ordering(606) 00:39:58.410 fused_ordering(607) 00:39:58.410 fused_ordering(608) 00:39:58.410 fused_ordering(609) 00:39:58.410 fused_ordering(610) 00:39:58.410 fused_ordering(611) 00:39:58.410 fused_ordering(612) 00:39:58.410 fused_ordering(613) 00:39:58.410 fused_ordering(614) 00:39:58.410 fused_ordering(615) 00:39:58.973 fused_ordering(616) 00:39:58.973 fused_ordering(617) 00:39:58.973 fused_ordering(618) 00:39:58.973 fused_ordering(619) 00:39:58.973 fused_ordering(620) 00:39:58.973 fused_ordering(621) 00:39:58.973 fused_ordering(622) 00:39:58.973 fused_ordering(623) 00:39:58.973 fused_ordering(624) 00:39:58.973 fused_ordering(625) 00:39:58.973 fused_ordering(626) 00:39:58.973 fused_ordering(627) 00:39:58.973 fused_ordering(628) 00:39:58.973 fused_ordering(629) 00:39:58.973 fused_ordering(630) 00:39:58.973 fused_ordering(631) 00:39:58.973 fused_ordering(632) 00:39:58.973 fused_ordering(633) 00:39:58.973 fused_ordering(634) 00:39:58.973 fused_ordering(635) 00:39:58.973 
fused_ordering(636) 00:39:58.973 fused_ordering(637) 00:39:58.973 fused_ordering(638) 00:39:58.973 fused_ordering(639) 00:39:58.973 fused_ordering(640) 00:39:58.973 fused_ordering(641) 00:39:58.973 fused_ordering(642) 00:39:58.973 fused_ordering(643) 00:39:58.973 fused_ordering(644) 00:39:58.973 fused_ordering(645) 00:39:58.973 fused_ordering(646) 00:39:58.973 fused_ordering(647) 00:39:58.973 fused_ordering(648) 00:39:58.973 fused_ordering(649) 00:39:58.973 fused_ordering(650) 00:39:58.973 fused_ordering(651) 00:39:58.973 fused_ordering(652) 00:39:58.973 fused_ordering(653) 00:39:58.973 fused_ordering(654) 00:39:58.973 fused_ordering(655) 00:39:58.973 fused_ordering(656) 00:39:58.973 fused_ordering(657) 00:39:58.973 fused_ordering(658) 00:39:58.973 fused_ordering(659) 00:39:58.973 fused_ordering(660) 00:39:58.973 fused_ordering(661) 00:39:58.973 fused_ordering(662) 00:39:58.973 fused_ordering(663) 00:39:58.973 fused_ordering(664) 00:39:58.973 fused_ordering(665) 00:39:58.973 fused_ordering(666) 00:39:58.973 fused_ordering(667) 00:39:58.973 fused_ordering(668) 00:39:58.973 fused_ordering(669) 00:39:58.973 fused_ordering(670) 00:39:58.973 fused_ordering(671) 00:39:58.973 fused_ordering(672) 00:39:58.973 fused_ordering(673) 00:39:58.973 fused_ordering(674) 00:39:58.973 fused_ordering(675) 00:39:58.973 fused_ordering(676) 00:39:58.973 fused_ordering(677) 00:39:58.973 fused_ordering(678) 00:39:58.973 fused_ordering(679) 00:39:58.973 fused_ordering(680) 00:39:58.973 fused_ordering(681) 00:39:58.973 fused_ordering(682) 00:39:58.973 fused_ordering(683) 00:39:58.973 fused_ordering(684) 00:39:58.973 fused_ordering(685) 00:39:58.973 fused_ordering(686) 00:39:58.973 fused_ordering(687) 00:39:58.973 fused_ordering(688) 00:39:58.973 fused_ordering(689) 00:39:58.973 fused_ordering(690) 00:39:58.973 fused_ordering(691) 00:39:58.973 fused_ordering(692) 00:39:58.973 fused_ordering(693) 00:39:58.973 fused_ordering(694) 00:39:58.973 fused_ordering(695) 00:39:58.973 fused_ordering(696) 00:39:58.973 fused_ordering(697) 00:39:58.973 fused_ordering(698) 00:39:58.973 fused_ordering(699) 00:39:58.973 fused_ordering(700) 00:39:58.973 fused_ordering(701) 00:39:58.973 fused_ordering(702) 00:39:58.973 fused_ordering(703) 00:39:58.973 fused_ordering(704) 00:39:58.973 fused_ordering(705) 00:39:58.973 fused_ordering(706) 00:39:58.973 fused_ordering(707) 00:39:58.973 fused_ordering(708) 00:39:58.973 fused_ordering(709) 00:39:58.973 fused_ordering(710) 00:39:58.973 fused_ordering(711) 00:39:58.973 fused_ordering(712) 00:39:58.973 fused_ordering(713) 00:39:58.973 fused_ordering(714) 00:39:58.973 fused_ordering(715) 00:39:58.973 fused_ordering(716) 00:39:58.973 fused_ordering(717) 00:39:58.973 fused_ordering(718) 00:39:58.974 fused_ordering(719) 00:39:58.974 fused_ordering(720) 00:39:58.974 fused_ordering(721) 00:39:58.974 fused_ordering(722) 00:39:58.974 fused_ordering(723) 00:39:58.974 fused_ordering(724) 00:39:58.974 fused_ordering(725) 00:39:58.974 fused_ordering(726) 00:39:58.974 fused_ordering(727) 00:39:58.974 fused_ordering(728) 00:39:58.974 fused_ordering(729) 00:39:58.974 fused_ordering(730) 00:39:58.974 fused_ordering(731) 00:39:58.974 fused_ordering(732) 00:39:58.974 fused_ordering(733) 00:39:58.974 fused_ordering(734) 00:39:58.974 fused_ordering(735) 00:39:58.974 fused_ordering(736) 00:39:58.974 fused_ordering(737) 00:39:58.974 fused_ordering(738) 00:39:58.974 fused_ordering(739) 00:39:58.974 fused_ordering(740) 00:39:58.974 fused_ordering(741) 00:39:58.974 fused_ordering(742) 00:39:58.974 fused_ordering(743) 
00:39:58.974 fused_ordering(744) 00:39:58.974 fused_ordering(745) 00:39:58.974 fused_ordering(746) 00:39:58.974 fused_ordering(747) 00:39:58.974 fused_ordering(748) 00:39:58.974 fused_ordering(749) 00:39:58.974 fused_ordering(750) 00:39:58.974 fused_ordering(751) 00:39:58.974 fused_ordering(752) 00:39:58.974 fused_ordering(753) 00:39:58.974 fused_ordering(754) 00:39:58.974 fused_ordering(755) 00:39:58.974 fused_ordering(756) 00:39:58.974 fused_ordering(757) 00:39:58.974 fused_ordering(758) 00:39:58.974 fused_ordering(759) 00:39:58.974 fused_ordering(760) 00:39:58.974 fused_ordering(761) 00:39:58.974 fused_ordering(762) 00:39:58.974 fused_ordering(763) 00:39:58.974 fused_ordering(764) 00:39:58.974 fused_ordering(765) 00:39:58.974 fused_ordering(766) 00:39:58.974 fused_ordering(767) 00:39:58.974 fused_ordering(768) 00:39:58.974 fused_ordering(769) 00:39:58.974 fused_ordering(770) 00:39:58.974 fused_ordering(771) 00:39:58.974 fused_ordering(772) 00:39:58.974 fused_ordering(773) 00:39:58.974 fused_ordering(774) 00:39:58.974 fused_ordering(775) 00:39:58.974 fused_ordering(776) 00:39:58.974 fused_ordering(777) 00:39:58.974 fused_ordering(778) 00:39:58.974 fused_ordering(779) 00:39:58.974 fused_ordering(780) 00:39:58.974 fused_ordering(781) 00:39:58.974 fused_ordering(782) 00:39:58.974 fused_ordering(783) 00:39:58.974 fused_ordering(784) 00:39:58.974 fused_ordering(785) 00:39:58.974 fused_ordering(786) 00:39:58.974 fused_ordering(787) 00:39:58.974 fused_ordering(788) 00:39:58.974 fused_ordering(789) 00:39:58.974 fused_ordering(790) 00:39:58.974 fused_ordering(791) 00:39:58.974 fused_ordering(792) 00:39:58.974 fused_ordering(793) 00:39:58.974 fused_ordering(794) 00:39:58.974 fused_ordering(795) 00:39:58.974 fused_ordering(796) 00:39:58.974 fused_ordering(797) 00:39:58.974 fused_ordering(798) 00:39:58.974 fused_ordering(799) 00:39:58.974 fused_ordering(800) 00:39:58.974 fused_ordering(801) 00:39:58.974 fused_ordering(802) 00:39:58.974 fused_ordering(803) 00:39:58.974 fused_ordering(804) 00:39:58.974 fused_ordering(805) 00:39:58.974 fused_ordering(806) 00:39:58.974 fused_ordering(807) 00:39:58.974 fused_ordering(808) 00:39:58.974 fused_ordering(809) 00:39:58.974 fused_ordering(810) 00:39:58.974 fused_ordering(811) 00:39:58.974 fused_ordering(812) 00:39:58.974 fused_ordering(813) 00:39:58.974 fused_ordering(814) 00:39:58.974 fused_ordering(815) 00:39:58.974 fused_ordering(816) 00:39:58.974 fused_ordering(817) 00:39:58.974 fused_ordering(818) 00:39:58.974 fused_ordering(819) 00:39:58.974 fused_ordering(820) 00:39:59.537 fused_ordering(821) 00:39:59.537 fused_ordering(822) 00:39:59.537 fused_ordering(823) 00:39:59.537 fused_ordering(824) 00:39:59.537 fused_ordering(825) 00:39:59.537 fused_ordering(826) 00:39:59.537 fused_ordering(827) 00:39:59.537 fused_ordering(828) 00:39:59.537 fused_ordering(829) 00:39:59.537 fused_ordering(830) 00:39:59.537 fused_ordering(831) 00:39:59.537 fused_ordering(832) 00:39:59.537 fused_ordering(833) 00:39:59.537 fused_ordering(834) 00:39:59.537 fused_ordering(835) 00:39:59.537 fused_ordering(836) 00:39:59.537 fused_ordering(837) 00:39:59.537 fused_ordering(838) 00:39:59.537 fused_ordering(839) 00:39:59.537 fused_ordering(840) 00:39:59.537 fused_ordering(841) 00:39:59.537 fused_ordering(842) 00:39:59.537 fused_ordering(843) 00:39:59.537 fused_ordering(844) 00:39:59.537 fused_ordering(845) 00:39:59.537 fused_ordering(846) 00:39:59.537 fused_ordering(847) 00:39:59.537 fused_ordering(848) 00:39:59.537 fused_ordering(849) 00:39:59.537 fused_ordering(850) 00:39:59.537 
fused_ordering(851) 00:39:59.537 fused_ordering(852) 00:39:59.537 fused_ordering(853) 00:39:59.537 fused_ordering(854) 00:39:59.537 fused_ordering(855) 00:39:59.537 fused_ordering(856) 00:39:59.537 fused_ordering(857) 00:39:59.537 fused_ordering(858) 00:39:59.537 fused_ordering(859) 00:39:59.537 fused_ordering(860) 00:39:59.537 fused_ordering(861) 00:39:59.537 fused_ordering(862) 00:39:59.537 fused_ordering(863) 00:39:59.537 fused_ordering(864) 00:39:59.537 fused_ordering(865) 00:39:59.537 fused_ordering(866) 00:39:59.537 fused_ordering(867) 00:39:59.537 fused_ordering(868) 00:39:59.537 fused_ordering(869) 00:39:59.537 fused_ordering(870) 00:39:59.537 fused_ordering(871) 00:39:59.537 fused_ordering(872) 00:39:59.537 fused_ordering(873) 00:39:59.537 fused_ordering(874) 00:39:59.537 fused_ordering(875) 00:39:59.537 fused_ordering(876) 00:39:59.537 fused_ordering(877) 00:39:59.537 fused_ordering(878) 00:39:59.537 fused_ordering(879) 00:39:59.537 fused_ordering(880) 00:39:59.537 fused_ordering(881) 00:39:59.537 fused_ordering(882) 00:39:59.537 fused_ordering(883) 00:39:59.537 fused_ordering(884) 00:39:59.537 fused_ordering(885) 00:39:59.537 fused_ordering(886) 00:39:59.537 fused_ordering(887) 00:39:59.537 fused_ordering(888) 00:39:59.537 fused_ordering(889) 00:39:59.537 fused_ordering(890) 00:39:59.537 fused_ordering(891) 00:39:59.537 fused_ordering(892) 00:39:59.537 fused_ordering(893) 00:39:59.537 fused_ordering(894) 00:39:59.537 fused_ordering(895) 00:39:59.537 fused_ordering(896) 00:39:59.537 fused_ordering(897) 00:39:59.537 fused_ordering(898) 00:39:59.537 fused_ordering(899) 00:39:59.537 fused_ordering(900) 00:39:59.537 fused_ordering(901) 00:39:59.537 fused_ordering(902) 00:39:59.537 fused_ordering(903) 00:39:59.537 fused_ordering(904) 00:39:59.537 fused_ordering(905) 00:39:59.537 fused_ordering(906) 00:39:59.537 fused_ordering(907) 00:39:59.537 fused_ordering(908) 00:39:59.537 fused_ordering(909) 00:39:59.537 fused_ordering(910) 00:39:59.537 fused_ordering(911) 00:39:59.537 fused_ordering(912) 00:39:59.537 fused_ordering(913) 00:39:59.537 fused_ordering(914) 00:39:59.537 fused_ordering(915) 00:39:59.537 fused_ordering(916) 00:39:59.537 fused_ordering(917) 00:39:59.537 fused_ordering(918) 00:39:59.537 fused_ordering(919) 00:39:59.537 fused_ordering(920) 00:39:59.537 fused_ordering(921) 00:39:59.537 fused_ordering(922) 00:39:59.537 fused_ordering(923) 00:39:59.537 fused_ordering(924) 00:39:59.537 fused_ordering(925) 00:39:59.537 fused_ordering(926) 00:39:59.537 fused_ordering(927) 00:39:59.537 fused_ordering(928) 00:39:59.537 fused_ordering(929) 00:39:59.537 fused_ordering(930) 00:39:59.537 fused_ordering(931) 00:39:59.537 fused_ordering(932) 00:39:59.537 fused_ordering(933) 00:39:59.537 fused_ordering(934) 00:39:59.537 fused_ordering(935) 00:39:59.537 fused_ordering(936) 00:39:59.537 fused_ordering(937) 00:39:59.537 fused_ordering(938) 00:39:59.537 fused_ordering(939) 00:39:59.537 fused_ordering(940) 00:39:59.537 fused_ordering(941) 00:39:59.537 fused_ordering(942) 00:39:59.537 fused_ordering(943) 00:39:59.537 fused_ordering(944) 00:39:59.537 fused_ordering(945) 00:39:59.537 fused_ordering(946) 00:39:59.537 fused_ordering(947) 00:39:59.537 fused_ordering(948) 00:39:59.537 fused_ordering(949) 00:39:59.537 fused_ordering(950) 00:39:59.537 fused_ordering(951) 00:39:59.537 fused_ordering(952) 00:39:59.537 fused_ordering(953) 00:39:59.537 fused_ordering(954) 00:39:59.537 fused_ordering(955) 00:39:59.537 fused_ordering(956) 00:39:59.537 fused_ordering(957) 00:39:59.537 fused_ordering(958) 
00:39:59.537 fused_ordering(959) 00:39:59.537 fused_ordering(960) 00:39:59.537 fused_ordering(961) 00:39:59.537 fused_ordering(962) 00:39:59.537 fused_ordering(963) 00:39:59.537 fused_ordering(964) 00:39:59.537 fused_ordering(965) 00:39:59.537 fused_ordering(966) 00:39:59.537 fused_ordering(967) 00:39:59.537 fused_ordering(968) 00:39:59.537 fused_ordering(969) 00:39:59.537 fused_ordering(970) 00:39:59.537 fused_ordering(971) 00:39:59.537 fused_ordering(972) 00:39:59.537 fused_ordering(973) 00:39:59.537 fused_ordering(974) 00:39:59.537 fused_ordering(975) 00:39:59.537 fused_ordering(976) 00:39:59.537 fused_ordering(977) 00:39:59.537 fused_ordering(978) 00:39:59.537 fused_ordering(979) 00:39:59.537 fused_ordering(980) 00:39:59.537 fused_ordering(981) 00:39:59.537 fused_ordering(982) 00:39:59.537 fused_ordering(983) 00:39:59.537 fused_ordering(984) 00:39:59.537 fused_ordering(985) 00:39:59.537 fused_ordering(986) 00:39:59.537 fused_ordering(987) 00:39:59.537 fused_ordering(988) 00:39:59.537 fused_ordering(989) 00:39:59.537 fused_ordering(990) 00:39:59.537 fused_ordering(991) 00:39:59.537 fused_ordering(992) 00:39:59.537 fused_ordering(993) 00:39:59.537 fused_ordering(994) 00:39:59.537 fused_ordering(995) 00:39:59.537 fused_ordering(996) 00:39:59.537 fused_ordering(997) 00:39:59.537 fused_ordering(998) 00:39:59.537 fused_ordering(999) 00:39:59.537 fused_ordering(1000) 00:39:59.537 fused_ordering(1001) 00:39:59.537 fused_ordering(1002) 00:39:59.537 fused_ordering(1003) 00:39:59.538 fused_ordering(1004) 00:39:59.538 fused_ordering(1005) 00:39:59.538 fused_ordering(1006) 00:39:59.538 fused_ordering(1007) 00:39:59.538 fused_ordering(1008) 00:39:59.538 fused_ordering(1009) 00:39:59.538 fused_ordering(1010) 00:39:59.538 fused_ordering(1011) 00:39:59.538 fused_ordering(1012) 00:39:59.538 fused_ordering(1013) 00:39:59.538 fused_ordering(1014) 00:39:59.538 fused_ordering(1015) 00:39:59.538 fused_ordering(1016) 00:39:59.538 fused_ordering(1017) 00:39:59.538 fused_ordering(1018) 00:39:59.538 fused_ordering(1019) 00:39:59.538 fused_ordering(1020) 00:39:59.538 fused_ordering(1021) 00:39:59.538 fused_ordering(1022) 00:39:59.538 fused_ordering(1023) 00:39:59.538 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:39:59.538 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:39:59.538 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:59.538 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:39:59.538 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:59.538 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:39:59.538 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:59.538 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:59.538 rmmod nvme_tcp 00:39:59.538 rmmod nvme_fabrics 00:39:59.538 rmmod nvme_keyring 00:39:59.538 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:59.538 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:39:59.538 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:39:59.538 05:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 615125 ']' 00:39:59.538 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 615125 00:39:59.538 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 615125 ']' 00:39:59.538 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 615125 00:39:59.538 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:39:59.538 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:59.538 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 615125 00:39:59.538 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:59.538 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:59.538 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 615125' 00:39:59.538 killing process with pid 615125 00:39:59.538 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 615125 00:39:59.538 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 615125 00:39:59.795 05:34:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:59.795 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:59.795 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:59.795 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:39:59.795 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:39:59.795 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:59.795 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:39:59.795 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:59.795 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:59.795 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:59.795 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:59.795 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:02.333 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:02.333 00:40:02.333 real 0m7.543s 00:40:02.333 user 0m5.034s 00:40:02.333 sys 0m3.152s 00:40:02.333 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:02.333 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:40:02.333 ************************************ 00:40:02.333 END TEST nvmf_fused_ordering 00:40:02.333 
************************************ 00:40:02.333 05:34:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:40:02.333 05:34:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:02.333 05:34:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:02.333 05:34:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:40:02.333 ************************************ 00:40:02.333 START TEST nvmf_ns_masking 00:40:02.333 ************************************ 00:40:02.333 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:40:02.333 * Looking for test storage... 00:40:02.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:02.333 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:02.333 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:40:02.333 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:02.333 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:02.333 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:02.333 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:02.333 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:02.333 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:40:02.333 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:02.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.334 --rc genhtml_branch_coverage=1 00:40:02.334 --rc genhtml_function_coverage=1 00:40:02.334 --rc genhtml_legend=1 00:40:02.334 --rc geninfo_all_blocks=1 00:40:02.334 --rc geninfo_unexecuted_blocks=1 00:40:02.334 00:40:02.334 ' 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:02.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.334 --rc genhtml_branch_coverage=1 00:40:02.334 --rc genhtml_function_coverage=1 00:40:02.334 --rc genhtml_legend=1 00:40:02.334 --rc geninfo_all_blocks=1 00:40:02.334 --rc geninfo_unexecuted_blocks=1 00:40:02.334 00:40:02.334 ' 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:02.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.334 --rc genhtml_branch_coverage=1 00:40:02.334 --rc genhtml_function_coverage=1 00:40:02.334 --rc genhtml_legend=1 00:40:02.334 --rc geninfo_all_blocks=1 00:40:02.334 --rc geninfo_unexecuted_blocks=1 00:40:02.334 00:40:02.334 ' 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:02.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.334 --rc genhtml_branch_coverage=1 00:40:02.334 --rc genhtml_function_coverage=1 00:40:02.334 --rc genhtml_legend=1 00:40:02.334 --rc geninfo_all_blocks=1 00:40:02.334 --rc geninfo_unexecuted_blocks=1 00:40:02.334 00:40:02.334 ' 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:02.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=09b63dd5-eadc-4ca8-a21f-6f6b44de81ba 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=70175ece-7ef3-4b55-96f2-bc8d7d4ad24e 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:40:02.334 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:40:02.335 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:40:02.335 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:40:02.335 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=8fa9f8e0-0655-49a0-a69b-bcbaab26a235 00:40:02.335 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:40:02.335 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:02.335 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:02.335 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:02.335 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:02.335 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:02.335 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:02.335 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:02.335 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:02.335 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:02.335 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:02.335 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:40:02.335 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:04.238 05:34:58 
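[Annotation] Here target/ns_masking.sh picks its fixtures: two namespace UUIDs from uuidgen, a subsystem NQN, two host NQNs, a host ID for nvme connect, plus rpc_py and the host-side RPC socket. A sketch of equivalent setup (paths and NQNs copied from the trace; the concrete UUID values will of course differ per run):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    ns1uuid=$(uuidgen)                        # identity for namespace 1
    ns2uuid=$(uuidgen)                        # identity for namespace 2
    SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN1=nqn.2016-06.io.spdk:host1
    HOSTNQN2=nqn.2016-06.io.spdk:host2
    HOSTID=$(uuidgen)                         # passed to nvme connect as -I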
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:04.238 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:04.238 05:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:04.238 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:04.238 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
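[Annotation] The device scan above walks the PCI bus for supported NICs (here two Intel E810 ports, device ID 0x159b) and resolves the kernel net device behind each function through sysfs. Roughly the same discovery by hand, using only lspci and sysfs (device IDs and PCI addresses taken from the trace):

    # List Intel E810 functions (vendor 0x8086, device 0x159b)
    lspci -d 8086:159b
    # Resolve the net device name behind each PCI function
    ls /sys/bus/pci/devices/0000:0a:00.0/net    # -> cvl_0_0 in this run
    ls /sys/bus/pci/devices/0000:0a:00.1/net    # -> cvl_0_1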
00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:04.238 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:04.238 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:04.239 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:04.239 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:04.239 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:04.239 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:04.239 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:04.239 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:04.239 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:04.239 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:04.239 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:04.239 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:04.239 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:04.239 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:04.239 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:04.239 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:04.239 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:04.239 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:04.239 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:04.497 05:34:58 
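[Annotation] nvmf_tcp_init splits the two ports of the NIC into target and initiator roles: cvl_0_0 is moved into a new network namespace (cvl_0_0_ns_spdk) and gets 10.0.0.2, while cvl_0_1 stays in the root namespace with 10.0.0.1. A condensed sketch of the ip commands traced above (interface names and addresses as reported):

    ip netns add cvl_0_0_ns_spdk                              # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up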
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:04.497 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:04.497 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:04.497 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:04.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:04.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:40:04.497 00:40:04.497 --- 10.0.0.2 ping statistics --- 00:40:04.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:04.497 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:40:04.497 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:04.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:04.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:40:04.498 00:40:04.498 --- 10.0.0.1 ping statistics --- 00:40:04.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:04.498 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:40:04.498 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:04.498 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:40:04.498 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:04.498 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:04.498 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:04.498 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:04.498 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:04.498 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:04.498 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:04.498 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:40:04.498 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:04.498 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:04.498 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:40:04.498 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=617474 00:40:04.498 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:40:04.498 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 617474 00:40:04.498 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 617474 ']' 00:40:04.498 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
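[Annotation] With the namespaces wired up, the harness opens TCP/4420 on the initiator-facing interface, checks reachability in both directions, loads the kernel NVMe/TCP initiator, and launches nvmf_tgt inside the target namespace, then waits for its RPC socket. A sketch under the assumption that the working directory is the SPDK repo; the polling loop stands in for the harness's waitforlisten helper:

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator
    modprobe nvme-tcp                                              # kernel initiator driver

    # Start the SPDK target inside the target namespace (flags as in the trace)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # wait until the default RPC socket answers
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 1; done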
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:04.498 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:04.498 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:04.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:04.498 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:04.498 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:40:04.498 [2024-12-09 05:34:58.574597] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:40:04.498 [2024-12-09 05:34:58.574670] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:04.498 [2024-12-09 05:34:58.650493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:04.498 [2024-12-09 05:34:58.708207] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:04.498 [2024-12-09 05:34:58.708300] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:04.498 [2024-12-09 05:34:58.708316] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:04.498 [2024-12-09 05:34:58.708328] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:04.498 [2024-12-09 05:34:58.708338] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:04.498 [2024-12-09 05:34:58.709005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:04.756 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:04.756 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:40:04.756 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:04.756 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:04.756 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:40:04.756 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:04.756 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:05.014 [2024-12-09 05:34:59.163832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:05.014 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:40:05.014 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:40:05.014 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:40:05.272 Malloc1 00:40:05.272 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:40:05.838 Malloc2 00:40:05.838 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:05.838 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:40:06.095 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:06.353 [2024-12-09 05:35:00.562692] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:06.611 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:40:06.611 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8fa9f8e0-0655-49a0-a69b-bcbaab26a235 -a 10.0.0.2 -s 4420 -i 4 00:40:06.611 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:40:06.611 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:40:06.611 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:06.611 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:40:06.611 
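[Annotation] The RPC sequence that follows (ns_masking.sh@53 through @67) builds the baseline target: a TCP transport, two 64 MiB malloc bdevs, one subsystem with namespace 1 attached and a 10.0.0.2:4420 listener, after which the kernel initiator connects. Condensed with the same flags (rpc_py and HOSTID as defined earlier; namespace 1 is auto-visible at this stage):

    $rpc_py nvmf_create_transport -t tcp -o -u 8192               # TCP transport, options as used by the test
    $rpc_py bdev_malloc_create 64 512 -b Malloc1                  # 64 MiB, 512 B blocks
    $rpc_py bdev_malloc_create 64 512 -b Malloc2
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1                 # nsid 1, auto-visible
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Kernel initiator side (root namespace), flags as in the trace
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
         -I "$HOSTID" -a 10.0.0.2 -s 4420 -i 4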
05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:40:08.533 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:08.533 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:08.533 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:08.533 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:08.533 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:08.533 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:40:08.533 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:40:08.533 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:40:08.790 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:40:08.790 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:40:08.790 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:40:08.790 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:40:08.790 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:40:08.790 [ 0]:0x1 00:40:08.790 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:40:08.790 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:40:08.790 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f18aa60a4e32405ea48240bb024a769a 00:40:08.790 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f18aa60a4e32405ea48240bb024a769a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:40:08.790 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:40:09.048 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:40:09.048 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:40:09.048 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:40:09.048 [ 0]:0x1 00:40:09.048 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:40:09.048 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:40:09.048 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f18aa60a4e32405ea48240bb024a769a 00:40:09.048 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f18aa60a4e32405ea48240bb024a769a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:40:09.048 05:35:03 
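[Annotation] The ns_is_visible helper exercised throughout this test boils down to two nvme-cli calls: list the active namespace IDs on the controller, then read the namespace's NGUID and require it to be non-zero. A sketch of that check for NSID 1, assuming the controller resolved from `nvme list-subsys` is nvme0 as above:

    ctrl=/dev/nvme0
    nvme list-ns $ctrl | grep 0x1                 # namespace appears in the active NS list
    nguid=$(nvme id-ns $ctrl -n 0x1 -o json | jq -r .nguid)
    # an all-zero NGUID means the namespace is not actually exposed to this host
    [[ $nguid != "00000000000000000000000000000000" ]] && echo "nsid 1 visible (nguid=$nguid)"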
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:40:09.048 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:40:09.048 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:40:09.048 [ 1]:0x2 00:40:09.048 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:40:09.048 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:40:09.048 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3aa350074bf74e9f84c58e2dc456d16b 00:40:09.048 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3aa350074bf74e9f84c58e2dc456d16b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:40:09.048 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:40:09.048 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:09.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:09.306 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:09.563 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:40:09.819 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:40:09.819 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8fa9f8e0-0655-49a0-a69b-bcbaab26a235 -a 10.0.0.2 -s 4420 -i 4 00:40:10.076 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:40:10.076 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:40:10.076 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:10.076 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:40:10.076 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:40:10.076 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:40:11.973 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:11.973 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:11.973 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:11.973 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:11.973 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:11.973 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
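[Annotation] This is the core of the masking test: after the disconnect, namespace 1 is removed and re-added with --no-auto-visible, so on the next connection the host still sees NSID 2 (added auto-visible at @71) while NSID 1 reports an all-zero NGUID, which is why the `NOT ns_is_visible 0x1` assertion below passes. Roughly:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # Re-add the namespace masked: no host can see it until explicitly allowed
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
         -I "$HOSTID" -a 10.0.0.2 -s 4420 -i 4
    # Expected state now: NSID 1 hidden, NSID 2 visible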
return 0 00:40:11.973 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:40:11.973 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:40:11.973 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:40:11.973 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:40:11.973 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:40:11.973 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:40:11.973 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:40:11.973 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:40:11.973 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:11.973 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:40:11.973 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:11.973 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:40:11.973 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:40:11.973 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:40:12.231 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:40:12.231 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:40:12.231 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:40:12.231 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:40:12.231 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:40:12.231 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:12.231 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:12.231 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:12.231 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:40:12.231 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:40:12.231 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:40:12.231 [ 0]:0x2 00:40:12.231 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:40:12.231 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:40:12.231 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=3aa350074bf74e9f84c58e2dc456d16b 00:40:12.231 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3aa350074bf74e9f84c58e2dc456d16b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:40:12.231 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:40:12.490 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:40:12.490 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:40:12.490 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:40:12.490 [ 0]:0x1 00:40:12.490 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:40:12.490 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:40:12.490 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f18aa60a4e32405ea48240bb024a769a 00:40:12.490 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f18aa60a4e32405ea48240bb024a769a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:40:12.490 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:40:12.490 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:40:12.490 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:40:12.490 [ 1]:0x2 00:40:12.490 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:40:12.490 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:40:12.490 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3aa350074bf74e9f84c58e2dc456d16b 00:40:12.490 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3aa350074bf74e9f84c58e2dc456d16b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:40:12.490 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:40:12.749 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:40:12.749 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:40:12.749 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:40:12.749 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:40:12.749 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:12.749 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:40:12.749 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:12.749 05:35:06 
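[Annotation] Visibility of a masked namespace is then toggled per host NQN: nvmf_ns_add_host at @88 grants host1 access to NSID 1 (its `[ 0]:0x1` entry reappears above), and nvmf_ns_remove_host at @93 revokes it again, all without touching the auto-visible NSID 2; the test re-runs its checks on the live connection, so the change takes effect without reconnecting. The two RPCs, with arguments exactly as traced:

    # Grant host1 access to the masked namespace (subsystem NQN, NSID, host NQN)
    $rpc_py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # ...and revoke it again
    $rpc_py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1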
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:40:12.749 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:40:12.749 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:40:12.749 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:40:12.749 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:40:13.008 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:40:13.008 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:40:13.008 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:40:13.008 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:13.008 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:13.008 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:13.008 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:40:13.008 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:40:13.008 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:40:13.008 [ 0]:0x2 00:40:13.008 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:40:13.008 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:40:13.008 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3aa350074bf74e9f84c58e2dc456d16b 00:40:13.008 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3aa350074bf74e9f84c58e2dc456d16b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:40:13.008 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:40:13.008 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:13.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:13.008 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:40:13.266 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:40:13.266 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8fa9f8e0-0655-49a0-a69b-bcbaab26a235 -a 10.0.0.2 -s 4420 -i 4 00:40:13.525 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:40:13.525 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:40:13.525 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:13.525 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:40:13.525 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:40:13.525 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:40:15.424 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:15.424 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:15.424 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:15.424 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:40:15.424 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:15.425 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:40:15.425 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:40:15.425 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:40:15.683 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:40:15.683 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:40:15.683 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:40:15.683 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:40:15.683 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:40:15.683 [ 0]:0x1 00:40:15.683 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:40:15.683 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:40:15.683 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f18aa60a4e32405ea48240bb024a769a 00:40:15.683 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f18aa60a4e32405ea48240bb024a769a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:40:15.683 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:40:15.683 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:40:15.683 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:40:15.683 [ 1]:0x2 00:40:15.683 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:40:15.683 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:40:15.683 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3aa350074bf74e9f84c58e2dc456d16b 00:40:15.683 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3aa350074bf74e9f84c58e2dc456d16b != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:40:15.683 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:40:15.941 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:40:15.941 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:40:15.941 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:40:15.941 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:40:15.941 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:15.941 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:40:15.941 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:15.941 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:40:15.941 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:40:15.941 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:40:15.941 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:40:15.941 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:40:15.941 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:40:15.941 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:40:15.941 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:40:15.941 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:15.941 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:15.941 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:15.941 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:40:15.941 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:40:15.941 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:40:15.941 [ 0]:0x2 00:40:15.941 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:40:15.941 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:40:16.200 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3aa350074bf74e9f84c58e2dc456d16b 00:40:16.200 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3aa350074bf74e9f84c58e2dc456d16b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:40:16.200 05:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:40:16.200 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:40:16.200 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:40:16.200 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:16.200 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:16.200 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:16.200 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:16.200 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:16.200 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:16.200 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:16.200 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:40:16.200 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:40:16.458 [2024-12-09 05:35:10.440192] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:40:16.458 request: 00:40:16.458 { 00:40:16.458 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:16.458 "nsid": 2, 00:40:16.458 "host": "nqn.2016-06.io.spdk:host1", 00:40:16.458 "method": "nvmf_ns_remove_host", 00:40:16.458 "req_id": 1 00:40:16.458 } 00:40:16.458 Got JSON-RPC error response 00:40:16.458 response: 00:40:16.458 { 00:40:16.458 "code": -32602, 00:40:16.458 "message": "Invalid parameters" 00:40:16.458 } 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:40:16.458 05:35:10 
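[Annotation] The NOT-wrapped RPC at @111 is the negative case: NSID 2 was added without --no-auto-visible, so its visibility is not host-managed and the nvmf_ns_remove_host call is rejected with JSON-RPC error -32602 (Invalid parameters), exactly as recorded above. Reproducing the expected failure by hand would look like:

    $rpc_py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 \
        && echo "unexpected success" \
        || echo "rejected as expected: NSID 2 is auto-visible, not host-managed"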
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:40:16.458 [ 0]:0x2 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3aa350074bf74e9f84c58e2dc456d16b 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3aa350074bf74e9f84c58e2dc456d16b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:16.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=618972 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 618972 /var/tmp/host.sock 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 618972 ']' 00:40:16.458 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:40:16.459 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:16.459 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:40:16.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:40:16.459 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:16.459 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:40:16.459 [2024-12-09 05:35:10.669740] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:40:16.459 [2024-12-09 05:35:10.669818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid618972 ] 00:40:16.717 [2024-12-09 05:35:10.737316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:16.717 [2024-12-09 05:35:10.795643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:16.975 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:16.975 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:40:16.975 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:17.233 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:17.490 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 09b63dd5-eadc-4ca8-a21f-6f6b44de81ba 00:40:17.490 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:40:17.490 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 09B63DD5EADC4CA8A21F6F6B44DE81BA -i 00:40:17.748 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 70175ece-7ef3-4b55-96f2-bc8d7d4ad24e 00:40:17.748 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:40:17.748 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 70175ECE7EF34B5596F2BC8D7D4AD24E -i 00:40:18.005 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
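[Annotation] For the second half of the test the initiator is no longer the kernel driver but a second SPDK process: spdk_tgt started with -r /var/tmp/host.sock -m 2 plays the host, the namespaces are re-created with explicit NGUIDs derived from the UUIDs chosen at the top (the trace shows `tr -d -`; the upper-casing is inferred from the NGUID values it produces), and one controller is attached per host NQN via bdev_nvme. A sketch of these steps; the masking flag is written as --no-auto-visible on the assumption that it matches the short `-i` seen in the trace:

    # Host-side SPDK application on its own RPC socket and core mask
    ./build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &

    # NGUID = UUID with dashes stripped and upper-cased
    ns1nguid=$(tr -d - <<< "$ns1uuid" | tr '[:lower:]' '[:upper:]')
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$ns1nguid" --no-auto-visible

    # Attach controllers from the SPDK host process, one per host NQN
    $rpc_py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
    $rpc_py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1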
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:40:18.263 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:40:18.520 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:40:18.520 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:40:19.085 nvme0n1 00:40:19.085 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:40:19.085 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:40:19.344 nvme1n2 00:40:19.344 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:40:19.344 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:40:19.344 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:40:19.344 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:40:19.344 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:40:19.601 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:40:19.601 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:40:19.601 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:40:19.601 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:40:19.858 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 09b63dd5-eadc-4ca8-a21f-6f6b44de81ba == \0\9\b\6\3\d\d\5\-\e\a\d\c\-\4\c\a\8\-\a\2\1\f\-\6\f\6\b\4\4\d\e\8\1\b\a ]] 00:40:19.858 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:40:19.859 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:40:19.859 05:35:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:40:20.116 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
70175ece-7ef3-4b55-96f2-bc8d7d4ad24e == \7\0\1\7\5\e\c\e\-\7\e\f\3\-\4\b\5\5\-\9\6\f\2\-\b\c\8\d\7\d\4\a\d\2\4\e ]] 00:40:20.116 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:20.373 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:20.630 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 09b63dd5-eadc-4ca8-a21f-6f6b44de81ba 00:40:20.630 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:40:20.630 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 09B63DD5EADC4CA8A21F6F6B44DE81BA 00:40:20.630 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:40:20.630 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 09B63DD5EADC4CA8A21F6F6B44DE81BA 00:40:20.630 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:20.630 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:20.630 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:20.630 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:20.630 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:20.630 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:20.630 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:20.630 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:40:20.630 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 09B63DD5EADC4CA8A21F6F6B44DE81BA 00:40:20.888 [2024-12-09 05:35:15.033264] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:40:20.888 [2024-12-09 05:35:15.033328] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:40:20.888 [2024-12-09 05:35:15.033358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:20.888 request: 00:40:20.888 { 00:40:20.888 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:20.888 "namespace": { 00:40:20.888 "bdev_name": 
"invalid", 00:40:20.888 "nsid": 1, 00:40:20.888 "nguid": "09B63DD5EADC4CA8A21F6F6B44DE81BA", 00:40:20.888 "no_auto_visible": false, 00:40:20.888 "hide_metadata": false 00:40:20.888 }, 00:40:20.888 "method": "nvmf_subsystem_add_ns", 00:40:20.888 "req_id": 1 00:40:20.888 } 00:40:20.888 Got JSON-RPC error response 00:40:20.888 response: 00:40:20.888 { 00:40:20.888 "code": -32602, 00:40:20.888 "message": "Invalid parameters" 00:40:20.888 } 00:40:20.888 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:40:20.888 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:20.888 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:20.888 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:20.888 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 09b63dd5-eadc-4ca8-a21f-6f6b44de81ba 00:40:20.888 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:40:20.888 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 09B63DD5EADC4CA8A21F6F6B44DE81BA -i 00:40:21.144 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:40:23.665 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:40:23.665 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:40:23.665 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:40:23.665 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:40:23.665 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 618972 00:40:23.665 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 618972 ']' 00:40:23.665 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 618972 00:40:23.665 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:40:23.665 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:23.665 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 618972 00:40:23.665 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:23.665 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:23.665 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 618972' 00:40:23.665 killing process with pid 618972 00:40:23.665 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 618972 00:40:23.665 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 618972 00:40:23.923 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:24.181 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:40:24.181 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:40:24.181 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:24.181 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:40:24.181 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:24.182 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:40:24.182 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:24.182 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:24.182 rmmod nvme_tcp 00:40:24.440 rmmod nvme_fabrics 00:40:24.440 rmmod nvme_keyring 00:40:24.440 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:24.440 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:40:24.440 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:40:24.440 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 617474 ']' 00:40:24.440 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 617474 00:40:24.440 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 617474 ']' 00:40:24.440 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 617474 00:40:24.440 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:40:24.440 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:24.440 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 617474 00:40:24.440 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:24.440 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:24.440 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 617474' 00:40:24.440 killing process with pid 617474 00:40:24.440 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 617474 00:40:24.440 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 617474 00:40:24.706 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:24.706 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:24.706 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:24.706 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:40:24.706 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:40:24.706 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:24.706 
05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:40:24.706 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:24.706 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:24.706 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:24.706 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:24.706 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:27.248 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:27.248 00:40:27.248 real 0m24.757s 00:40:27.248 user 0m35.573s 00:40:27.248 sys 0m4.814s 00:40:27.248 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:27.248 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:40:27.248 ************************************ 00:40:27.248 END TEST nvmf_ns_masking 00:40:27.248 ************************************ 00:40:27.248 05:35:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:40:27.248 05:35:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:40:27.248 05:35:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:27.248 05:35:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:27.248 05:35:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:40:27.248 ************************************ 00:40:27.248 START TEST nvmf_nvme_cli 00:40:27.248 ************************************ 00:40:27.248 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:40:27.248 * Looking for test storage... 
00:40:27.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:27.248 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:27.248 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:40:27.248 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:27.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.248 --rc genhtml_branch_coverage=1 00:40:27.248 --rc genhtml_function_coverage=1 00:40:27.248 --rc genhtml_legend=1 00:40:27.248 --rc geninfo_all_blocks=1 00:40:27.248 --rc geninfo_unexecuted_blocks=1 00:40:27.248 00:40:27.248 ' 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:27.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.248 --rc genhtml_branch_coverage=1 00:40:27.248 --rc genhtml_function_coverage=1 00:40:27.248 --rc genhtml_legend=1 00:40:27.248 --rc geninfo_all_blocks=1 00:40:27.248 --rc geninfo_unexecuted_blocks=1 00:40:27.248 00:40:27.248 ' 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:27.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.248 --rc genhtml_branch_coverage=1 00:40:27.248 --rc genhtml_function_coverage=1 00:40:27.248 --rc genhtml_legend=1 00:40:27.248 --rc geninfo_all_blocks=1 00:40:27.248 --rc geninfo_unexecuted_blocks=1 00:40:27.248 00:40:27.248 ' 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:27.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.248 --rc genhtml_branch_coverage=1 00:40:27.248 --rc genhtml_function_coverage=1 00:40:27.248 --rc genhtml_legend=1 00:40:27.248 --rc geninfo_all_blocks=1 00:40:27.248 --rc geninfo_unexecuted_blocks=1 00:40:27.248 00:40:27.248 ' 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:27.248 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:27.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:27.249 05:35:21 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:40:27.249 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:29.205 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:29.205 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:29.205 
05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:29.205 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:29.205 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:29.205 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:29.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:29.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:40:29.206 00:40:29.206 --- 10.0.0.2 ping statistics --- 00:40:29.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:29.206 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:29.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:29.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:40:29.206 00:40:29.206 --- 10.0.0.1 ping statistics --- 00:40:29.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:29.206 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=621960 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 621960 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 621960 ']' 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:29.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:29.206 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:40:29.206 [2024-12-09 05:35:23.356104] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:40:29.206 [2024-12-09 05:35:23.356211] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:29.481 [2024-12-09 05:35:23.437602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:29.481 [2024-12-09 05:35:23.499730] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:29.481 [2024-12-09 05:35:23.499796] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:29.481 [2024-12-09 05:35:23.499825] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:29.481 [2024-12-09 05:35:23.499837] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:29.481 [2024-12-09 05:35:23.499846] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:29.481 [2024-12-09 05:35:23.501497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:29.481 [2024-12-09 05:35:23.501556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:29.481 [2024-12-09 05:35:23.501560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:29.481 [2024-12-09 05:35:23.501530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:29.481 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:29.481 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:40:29.481 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:29.481 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:29.481 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:40:29.481 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:29.481 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:29.481 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.481 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:40:29.481 [2024-12-09 05:35:23.652244] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:29.481 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.481 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:29.481 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.481 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:40:29.481 Malloc0 00:40:29.481 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.481 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:40:29.481 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:40:29.481 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:40:29.766 Malloc1 00:40:29.766 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.766 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:40:29.766 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.766 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:40:29.766 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.766 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:29.766 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.766 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:40:29.766 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.766 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:40:29.766 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.766 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:40:29.766 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.766 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:29.766 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.766 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:40:29.766 [2024-12-09 05:35:23.747894] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:29.766 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.766 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:29.766 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.766 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:40:29.766 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.766 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:40:29.766 00:40:29.766 Discovery Log Number of Records 2, Generation counter 2 00:40:29.766 =====Discovery Log Entry 0====== 00:40:29.766 trtype: tcp 00:40:29.766 adrfam: ipv4 00:40:29.766 subtype: current discovery subsystem 00:40:29.766 treq: not required 00:40:29.766 portid: 0 00:40:29.766 trsvcid: 4420 00:40:29.766 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:40:29.766 traddr: 10.0.0.2 00:40:29.767 eflags: explicit discovery connections, duplicate discovery information 00:40:29.767 sectype: none 00:40:29.767 =====Discovery Log Entry 1====== 00:40:29.767 trtype: tcp 00:40:29.767 adrfam: ipv4 00:40:29.767 subtype: nvme subsystem 00:40:29.767 treq: not required 00:40:29.767 portid: 0 00:40:29.767 trsvcid: 4420 00:40:29.767 subnqn: nqn.2016-06.io.spdk:cnode1 00:40:29.767 traddr: 10.0.0.2 00:40:29.767 eflags: none 00:40:29.767 sectype: none 00:40:29.767 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:40:29.767 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:40:29.767 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:40:29.767 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:40:29.767 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:40:29.767 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:40:29.767 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:40:29.767 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:40:29.767 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:40:29.767 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:40:29.767 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:30.744 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:40:30.744 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:40:30.744 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:30.744 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:40:30.744 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:40:30.744 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:40:32.647 05:35:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:40:32.647 /dev/nvme0n2 ]] 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:32.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:32.647 05:35:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:32.647 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:32.648 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:32.648 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:40:32.648 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:40:32.648 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:32.648 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.648 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:40:32.648 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.648 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:40:32.648 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:40:32.648 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:32.648 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:40:32.648 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:32.648 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:40:32.648 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:32.648 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:32.648 rmmod nvme_tcp 00:40:32.648 rmmod nvme_fabrics 00:40:32.648 rmmod nvme_keyring 00:40:32.648 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:32.648 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:40:32.648 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:40:32.648 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 621960 ']' 00:40:32.648 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 621960 00:40:32.648 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 621960 ']' 00:40:32.648 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 621960 00:40:32.648 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:40:32.648 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:32.648 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 621960 
00:40:32.906 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:32.906 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:32.906 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 621960' 00:40:32.906 killing process with pid 621960 00:40:32.906 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 621960 00:40:32.906 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 621960 00:40:33.165 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:33.165 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:33.165 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:33.165 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:40:33.165 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:40:33.165 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:33.165 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:40:33.165 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:33.165 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:33.165 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:33.165 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:33.166 05:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:35.072 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:35.072 00:40:35.072 real 0m8.339s 00:40:35.072 user 0m15.242s 00:40:35.072 sys 0m2.295s 00:40:35.072 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:35.072 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:40:35.072 ************************************ 00:40:35.072 END TEST nvmf_nvme_cli 00:40:35.072 ************************************ 00:40:35.072 05:35:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:40:35.072 05:35:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:40:35.072 05:35:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:35.072 05:35:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:35.072 05:35:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:40:35.331 ************************************ 00:40:35.331 START TEST nvmf_vfio_user 00:40:35.331 ************************************ 00:40:35.331 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 
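For reference, the nvme-cli flow that the nvmf_nvme_cli test drives above (discover, connect, wait for the namespaces to surface, disconnect) can be reproduced by hand roughly as in the sketch below. This is a minimal sketch using the address, subsystem NQN, host NQN/ID and serial from this particular run; those values are generated per run and will differ on another machine.

  # list what the discovery service at 10.0.0.2:4420 advertises
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  # connect to the data subsystem with this run's generated host identity
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
  # wait until both namespaces show up under the SPDK serial, as waitforserial does
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 2 ]; do sleep 1; done
  # tear the host side back down
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1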
00:40:35.331 * Looking for test storage... 00:40:35.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:35.331 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:35.331 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:40:35.331 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:35.331 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:35.331 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:35.331 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:35.331 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:35.331 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:40:35.331 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:40:35.331 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:40:35.331 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:40:35.331 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:40:35.331 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:40:35.331 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:40:35.331 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:35.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:35.332 --rc genhtml_branch_coverage=1 00:40:35.332 --rc genhtml_function_coverage=1 00:40:35.332 --rc genhtml_legend=1 00:40:35.332 --rc geninfo_all_blocks=1 00:40:35.332 --rc geninfo_unexecuted_blocks=1 00:40:35.332 00:40:35.332 ' 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:35.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:35.332 --rc genhtml_branch_coverage=1 00:40:35.332 --rc genhtml_function_coverage=1 00:40:35.332 --rc genhtml_legend=1 00:40:35.332 --rc geninfo_all_blocks=1 00:40:35.332 --rc geninfo_unexecuted_blocks=1 00:40:35.332 00:40:35.332 ' 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:35.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:35.332 --rc genhtml_branch_coverage=1 00:40:35.332 --rc genhtml_function_coverage=1 00:40:35.332 --rc genhtml_legend=1 00:40:35.332 --rc geninfo_all_blocks=1 00:40:35.332 --rc geninfo_unexecuted_blocks=1 00:40:35.332 00:40:35.332 ' 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:35.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:35.332 --rc genhtml_branch_coverage=1 00:40:35.332 --rc genhtml_function_coverage=1 00:40:35.332 --rc genhtml_legend=1 00:40:35.332 --rc geninfo_all_blocks=1 00:40:35.332 --rc geninfo_unexecuted_blocks=1 00:40:35.332 00:40:35.332 ' 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:35.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=622820 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 622820' 00:40:35.332 Process pid: 622820 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:40:35.332 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:40:35.333 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 622820 00:40:35.333 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 622820 ']' 00:40:35.333 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:35.333 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:35.333 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:35.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:35.333 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:35.333 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:40:35.333 [2024-12-09 05:35:29.513884] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:40:35.333 [2024-12-09 05:35:29.513967] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:35.591 [2024-12-09 05:35:29.584100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:35.591 [2024-12-09 05:35:29.645132] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:35.591 [2024-12-09 05:35:29.645192] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
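The target bring-up captured here amounts to starting nvmf_tgt on four cores and waiting for its RPC socket before configuring it. A minimal hand-run sketch of that step (the SPDK path is this workspace's checkout, and the rpc_get_methods polling loop is only a stand-in for the harness's waitforlisten helper):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # start the target on cores 0-3 with all tracepoint groups enabled
  "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!
  # block until the app answers on /var/tmp/spdk.sock before sending config RPCs
  until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 1; done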
00:40:35.591 [2024-12-09 05:35:29.645206] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:35.591 [2024-12-09 05:35:29.645217] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:35.591 [2024-12-09 05:35:29.645227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:35.591 [2024-12-09 05:35:29.646909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:35.591 [2024-12-09 05:35:29.646975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:35.591 [2024-12-09 05:35:29.647041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:35.591 [2024-12-09 05:35:29.647044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:35.591 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:35.591 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:40:35.591 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:40:36.963 05:35:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:40:36.963 05:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:40:36.963 05:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:40:36.963 05:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:40:36.963 05:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:40:36.963 05:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:40:37.220 Malloc1 00:40:37.220 05:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:40:37.476 05:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:40:37.733 05:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:40:37.991 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:40:37.991 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:40:37.991 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:40:38.250 Malloc2 00:40:38.250 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
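Per emulated device, the configuration RPCs issued just above boil down to the following sketch (shown for the first device; the second repeats the same sequence with Malloc2, cnode2 and the vfio-user2 directory, and SPDK points at this workspace's checkout as in the earlier sketch). The only vfio-user-specific parts are the transport type and the listener address, which is a socket directory rather than an IP:

  RPC="$SPDK/scripts/rpc.py"
  "$RPC" nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  "$RPC" bdev_malloc_create 64 512 -b Malloc1
  "$RPC" nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  "$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  # the listen address is the directory holding the vfio-user socket files
  "$RPC" nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user1/1 -s 0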
00:40:38.508 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:40:39.074 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:40:39.074 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:40:39.074 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:40:39.074 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:40:39.074 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:40:39.074 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:40:39.074 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:40:39.074 [2024-12-09 05:35:33.283497] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:40:39.074 [2024-12-09 05:35:33.283539] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid623246 ] 00:40:39.333 [2024-12-09 05:35:33.333895] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:40:39.333 [2024-12-09 05:35:33.342761] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:40:39.333 [2024-12-09 05:35:33.342793] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3ed2d18000 00:40:39.333 [2024-12-09 05:35:33.343752] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:40:39.333 [2024-12-09 05:35:33.344752] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:40:39.333 [2024-12-09 05:35:33.345752] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:40:39.333 [2024-12-09 05:35:33.346763] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:40:39.333 [2024-12-09 05:35:33.347762] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:40:39.333 [2024-12-09 05:35:33.348767] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:40:39.333 [2024-12-09 05:35:33.349771] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:40:39.333 [2024-12-09 05:35:33.350782] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:40:39.333 [2024-12-09 05:35:33.351786] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:40:39.333 [2024-12-09 05:35:33.351806] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3ed2d0d000 00:40:39.333 [2024-12-09 05:35:33.352985] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:40:39.333 [2024-12-09 05:35:33.367569] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:40:39.333 [2024-12-09 05:35:33.367617] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:40:39.333 [2024-12-09 05:35:33.369908] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:40:39.333 [2024-12-09 05:35:33.369961] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:40:39.333 [2024-12-09 05:35:33.370051] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:40:39.333 [2024-12-09 05:35:33.370079] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:40:39.333 [2024-12-09 05:35:33.370090] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:40:39.333 [2024-12-09 05:35:33.370899] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:40:39.333 [2024-12-09 05:35:33.370924] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:40:39.334 [2024-12-09 05:35:33.370938] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:40:39.334 [2024-12-09 05:35:33.375297] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:40:39.334 [2024-12-09 05:35:33.375318] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:40:39.334 [2024-12-09 05:35:33.375332] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:40:39.334 [2024-12-09 05:35:33.375931] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:40:39.334 [2024-12-09 05:35:33.375948] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:40:39.334 [2024-12-09 05:35:33.376935] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:40:39.334 [2024-12-09 05:35:33.376955] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:40:39.334 [2024-12-09 05:35:33.376964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:40:39.334 [2024-12-09 05:35:33.376975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:40:39.334 [2024-12-09 05:35:33.377085] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:40:39.334 [2024-12-09 05:35:33.377092] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:40:39.334 [2024-12-09 05:35:33.377100] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:40:39.334 [2024-12-09 05:35:33.377945] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:40:39.334 [2024-12-09 05:35:33.378950] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:40:39.334 [2024-12-09 05:35:33.379955] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:40:39.334 [2024-12-09 05:35:33.380948] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:40:39.334 [2024-12-09 05:35:33.381064] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:40:39.334 [2024-12-09 05:35:33.381965] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:40:39.334 [2024-12-09 05:35:33.381983] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:40:39.334 [2024-12-09 05:35:33.381992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:40:39.334 [2024-12-09 05:35:33.382016] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:40:39.334 [2024-12-09 05:35:33.382033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:40:39.334 [2024-12-09 05:35:33.382063] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:40:39.334 [2024-12-09 05:35:33.382074] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:40:39.334 [2024-12-09 05:35:33.382080] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:40:39.334 [2024-12-09 05:35:33.382097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:40:39.334 [2024-12-09 05:35:33.382150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:40:39.334 [2024-12-09 05:35:33.382167] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:40:39.334 [2024-12-09 05:35:33.382175] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:40:39.334 [2024-12-09 05:35:33.382182] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:40:39.334 [2024-12-09 05:35:33.382190] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:40:39.334 [2024-12-09 05:35:33.382197] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:40:39.334 [2024-12-09 05:35:33.382205] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:40:39.334 [2024-12-09 05:35:33.382212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:40:39.334 [2024-12-09 05:35:33.382223] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:40:39.334 [2024-12-09 05:35:33.382238] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:40:39.334 [2024-12-09 05:35:33.382270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:40:39.334 [2024-12-09 05:35:33.382295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:40:39.334 [2024-12-09 05:35:33.382309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:40:39.334 [2024-12-09 05:35:33.382321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:40:39.334 [2024-12-09 05:35:33.382333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:40:39.334 [2024-12-09 05:35:33.382346] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:40:39.334 [2024-12-09 05:35:33.382363] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:40:39.334 [2024-12-09 05:35:33.382379] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:40:39.334 [2024-12-09 05:35:33.382394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:40:39.334 [2024-12-09 05:35:33.382405] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:40:39.334 
[2024-12-09 05:35:33.382413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:40:39.334 [2024-12-09 05:35:33.382429] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:40:39.334 [2024-12-09 05:35:33.382440] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:40:39.334 [2024-12-09 05:35:33.382453] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:40:39.334 [2024-12-09 05:35:33.382470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:40:39.334 [2024-12-09 05:35:33.382539] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:40:39.334 [2024-12-09 05:35:33.382557] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:40:39.334 [2024-12-09 05:35:33.382586] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:40:39.334 [2024-12-09 05:35:33.382595] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:40:39.334 [2024-12-09 05:35:33.382601] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:40:39.334 [2024-12-09 05:35:33.382611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:40:39.334 [2024-12-09 05:35:33.382629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:40:39.334 [2024-12-09 05:35:33.382666] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:40:39.334 [2024-12-09 05:35:33.382681] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:40:39.334 [2024-12-09 05:35:33.382696] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:40:39.334 [2024-12-09 05:35:33.382707] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:40:39.334 [2024-12-09 05:35:33.382716] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:40:39.334 [2024-12-09 05:35:33.382721] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:40:39.334 [2024-12-09 05:35:33.382730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:40:39.334 [2024-12-09 05:35:33.382763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:40:39.334 [2024-12-09 05:35:33.382784] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:40:39.334 [2024-12-09 05:35:33.382798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:40:39.334 [2024-12-09 05:35:33.382810] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:40:39.334 [2024-12-09 05:35:33.382818] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:40:39.334 [2024-12-09 05:35:33.382824] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:40:39.334 [2024-12-09 05:35:33.382833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:40:39.334 [2024-12-09 05:35:33.382846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:40:39.334 [2024-12-09 05:35:33.382864] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:40:39.334 [2024-12-09 05:35:33.382877] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:40:39.334 [2024-12-09 05:35:33.382889] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:40:39.334 [2024-12-09 05:35:33.382899] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:40:39.335 [2024-12-09 05:35:33.382908] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:40:39.335 [2024-12-09 05:35:33.382916] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:40:39.335 [2024-12-09 05:35:33.382923] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:40:39.335 [2024-12-09 05:35:33.382931] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:40:39.335 [2024-12-09 05:35:33.382939] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:40:39.335 [2024-12-09 05:35:33.382965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:40:39.335 [2024-12-09 05:35:33.382983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:40:39.335 [2024-12-09 05:35:33.383002] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:40:39.335 [2024-12-09 05:35:33.383014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:40:39.335 [2024-12-09 05:35:33.383029] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:40:39.335 [2024-12-09 05:35:33.383040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:40:39.335 [2024-12-09 05:35:33.383056] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:40:39.335 [2024-12-09 05:35:33.383067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:40:39.335 [2024-12-09 05:35:33.383088] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:40:39.335 [2024-12-09 05:35:33.383098] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:40:39.335 [2024-12-09 05:35:33.383108] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:40:39.335 [2024-12-09 05:35:33.383114] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:40:39.335 [2024-12-09 05:35:33.383119] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:40:39.335 [2024-12-09 05:35:33.383128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:40:39.335 [2024-12-09 05:35:33.383140] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:40:39.335 [2024-12-09 05:35:33.383148] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:40:39.335 [2024-12-09 05:35:33.383153] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:40:39.335 [2024-12-09 05:35:33.383162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:40:39.335 [2024-12-09 05:35:33.383173] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:40:39.335 [2024-12-09 05:35:33.383181] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:40:39.335 [2024-12-09 05:35:33.383186] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:40:39.335 [2024-12-09 05:35:33.383195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:40:39.335 [2024-12-09 05:35:33.383207] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:40:39.335 [2024-12-09 05:35:33.383215] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:40:39.335 [2024-12-09 05:35:33.383220] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:40:39.335 [2024-12-09 05:35:33.383229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:40:39.335 [2024-12-09 05:35:33.383240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:40:39.335 [2024-12-09 05:35:33.383287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:40:39.335 [2024-12-09 05:35:33.383310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:40:39.335 [2024-12-09 05:35:33.383323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:40:39.335 ===================================================== 00:40:39.335 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:40:39.335 ===================================================== 00:40:39.335 Controller Capabilities/Features 00:40:39.335 ================================ 00:40:39.335 Vendor ID: 4e58 00:40:39.335 Subsystem Vendor ID: 4e58 00:40:39.335 Serial Number: SPDK1 00:40:39.335 Model Number: SPDK bdev Controller 00:40:39.335 Firmware Version: 25.01 00:40:39.335 Recommended Arb Burst: 6 00:40:39.335 IEEE OUI Identifier: 8d 6b 50 00:40:39.335 Multi-path I/O 00:40:39.335 May have multiple subsystem ports: Yes 00:40:39.335 May have multiple controllers: Yes 00:40:39.335 Associated with SR-IOV VF: No 00:40:39.335 Max Data Transfer Size: 131072 00:40:39.335 Max Number of Namespaces: 32 00:40:39.335 Max Number of I/O Queues: 127 00:40:39.335 NVMe Specification Version (VS): 1.3 00:40:39.335 NVMe Specification Version (Identify): 1.3 00:40:39.335 Maximum Queue Entries: 256 00:40:39.335 Contiguous Queues Required: Yes 00:40:39.335 Arbitration Mechanisms Supported 00:40:39.335 Weighted Round Robin: Not Supported 00:40:39.335 Vendor Specific: Not Supported 00:40:39.335 Reset Timeout: 15000 ms 00:40:39.335 Doorbell Stride: 4 bytes 00:40:39.335 NVM Subsystem Reset: Not Supported 00:40:39.335 Command Sets Supported 00:40:39.335 NVM Command Set: Supported 00:40:39.335 Boot Partition: Not Supported 00:40:39.335 Memory Page Size Minimum: 4096 bytes 00:40:39.335 Memory Page Size Maximum: 4096 bytes 00:40:39.335 Persistent Memory Region: Not Supported 00:40:39.335 Optional Asynchronous Events Supported 00:40:39.335 Namespace Attribute Notices: Supported 00:40:39.335 Firmware Activation Notices: Not Supported 00:40:39.335 ANA Change Notices: Not Supported 00:40:39.335 PLE Aggregate Log Change Notices: Not Supported 00:40:39.335 LBA Status Info Alert Notices: Not Supported 00:40:39.335 EGE Aggregate Log Change Notices: Not Supported 00:40:39.335 Normal NVM Subsystem Shutdown event: Not Supported 00:40:39.335 Zone Descriptor Change Notices: Not Supported 00:40:39.335 Discovery Log Change Notices: Not Supported 00:40:39.335 Controller Attributes 00:40:39.335 128-bit Host Identifier: Supported 00:40:39.335 Non-Operational Permissive Mode: Not Supported 00:40:39.335 NVM Sets: Not Supported 00:40:39.335 Read Recovery Levels: Not Supported 00:40:39.335 Endurance Groups: Not Supported 00:40:39.335 Predictable Latency Mode: Not Supported 00:40:39.335 Traffic Based Keep ALive: Not Supported 00:40:39.335 Namespace Granularity: Not Supported 00:40:39.335 SQ Associations: Not Supported 00:40:39.335 UUID List: Not Supported 00:40:39.335 Multi-Domain Subsystem: Not Supported 00:40:39.335 Fixed Capacity Management: Not Supported 00:40:39.335 Variable Capacity Management: Not Supported 00:40:39.335 Delete Endurance Group: Not Supported 00:40:39.335 Delete NVM Set: Not Supported 00:40:39.335 Extended LBA Formats Supported: Not Supported 00:40:39.335 Flexible Data Placement Supported: Not Supported 00:40:39.335 00:40:39.335 Controller Memory Buffer Support 00:40:39.335 ================================ 00:40:39.335 
Supported: No 00:40:39.335 00:40:39.335 Persistent Memory Region Support 00:40:39.335 ================================ 00:40:39.335 Supported: No 00:40:39.335 00:40:39.335 Admin Command Set Attributes 00:40:39.335 ============================ 00:40:39.335 Security Send/Receive: Not Supported 00:40:39.335 Format NVM: Not Supported 00:40:39.335 Firmware Activate/Download: Not Supported 00:40:39.335 Namespace Management: Not Supported 00:40:39.335 Device Self-Test: Not Supported 00:40:39.335 Directives: Not Supported 00:40:39.335 NVMe-MI: Not Supported 00:40:39.335 Virtualization Management: Not Supported 00:40:39.335 Doorbell Buffer Config: Not Supported 00:40:39.335 Get LBA Status Capability: Not Supported 00:40:39.335 Command & Feature Lockdown Capability: Not Supported 00:40:39.335 Abort Command Limit: 4 00:40:39.335 Async Event Request Limit: 4 00:40:39.335 Number of Firmware Slots: N/A 00:40:39.335 Firmware Slot 1 Read-Only: N/A 00:40:39.335 Firmware Activation Without Reset: N/A 00:40:39.335 Multiple Update Detection Support: N/A 00:40:39.335 Firmware Update Granularity: No Information Provided 00:40:39.335 Per-Namespace SMART Log: No 00:40:39.335 Asymmetric Namespace Access Log Page: Not Supported 00:40:39.335 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:40:39.335 Command Effects Log Page: Supported 00:40:39.335 Get Log Page Extended Data: Supported 00:40:39.335 Telemetry Log Pages: Not Supported 00:40:39.335 Persistent Event Log Pages: Not Supported 00:40:39.335 Supported Log Pages Log Page: May Support 00:40:39.335 Commands Supported & Effects Log Page: Not Supported 00:40:39.335 Feature Identifiers & Effects Log Page:May Support 00:40:39.335 NVMe-MI Commands & Effects Log Page: May Support 00:40:39.335 Data Area 4 for Telemetry Log: Not Supported 00:40:39.335 Error Log Page Entries Supported: 128 00:40:39.335 Keep Alive: Supported 00:40:39.335 Keep Alive Granularity: 10000 ms 00:40:39.335 00:40:39.335 NVM Command Set Attributes 00:40:39.335 ========================== 00:40:39.335 Submission Queue Entry Size 00:40:39.335 Max: 64 00:40:39.336 Min: 64 00:40:39.336 Completion Queue Entry Size 00:40:39.336 Max: 16 00:40:39.336 Min: 16 00:40:39.336 Number of Namespaces: 32 00:40:39.336 Compare Command: Supported 00:40:39.336 Write Uncorrectable Command: Not Supported 00:40:39.336 Dataset Management Command: Supported 00:40:39.336 Write Zeroes Command: Supported 00:40:39.336 Set Features Save Field: Not Supported 00:40:39.336 Reservations: Not Supported 00:40:39.336 Timestamp: Not Supported 00:40:39.336 Copy: Supported 00:40:39.336 Volatile Write Cache: Present 00:40:39.336 Atomic Write Unit (Normal): 1 00:40:39.336 Atomic Write Unit (PFail): 1 00:40:39.336 Atomic Compare & Write Unit: 1 00:40:39.336 Fused Compare & Write: Supported 00:40:39.336 Scatter-Gather List 00:40:39.336 SGL Command Set: Supported (Dword aligned) 00:40:39.336 SGL Keyed: Not Supported 00:40:39.336 SGL Bit Bucket Descriptor: Not Supported 00:40:39.336 SGL Metadata Pointer: Not Supported 00:40:39.336 Oversized SGL: Not Supported 00:40:39.336 SGL Metadata Address: Not Supported 00:40:39.336 SGL Offset: Not Supported 00:40:39.336 Transport SGL Data Block: Not Supported 00:40:39.336 Replay Protected Memory Block: Not Supported 00:40:39.336 00:40:39.336 Firmware Slot Information 00:40:39.336 ========================= 00:40:39.336 Active slot: 1 00:40:39.336 Slot 1 Firmware Revision: 25.01 00:40:39.336 00:40:39.336 00:40:39.336 Commands Supported and Effects 00:40:39.336 ============================== 00:40:39.336 Admin 
Commands 00:40:39.336 -------------- 00:40:39.336 Get Log Page (02h): Supported 00:40:39.336 Identify (06h): Supported 00:40:39.336 Abort (08h): Supported 00:40:39.336 Set Features (09h): Supported 00:40:39.336 Get Features (0Ah): Supported 00:40:39.336 Asynchronous Event Request (0Ch): Supported 00:40:39.336 Keep Alive (18h): Supported 00:40:39.336 I/O Commands 00:40:39.336 ------------ 00:40:39.336 Flush (00h): Supported LBA-Change 00:40:39.336 Write (01h): Supported LBA-Change 00:40:39.336 Read (02h): Supported 00:40:39.336 Compare (05h): Supported 00:40:39.336 Write Zeroes (08h): Supported LBA-Change 00:40:39.336 Dataset Management (09h): Supported LBA-Change 00:40:39.336 Copy (19h): Supported LBA-Change 00:40:39.336 00:40:39.336 Error Log 00:40:39.336 ========= 00:40:39.336 00:40:39.336 Arbitration 00:40:39.336 =========== 00:40:39.336 Arbitration Burst: 1 00:40:39.336 00:40:39.336 Power Management 00:40:39.336 ================ 00:40:39.336 Number of Power States: 1 00:40:39.336 Current Power State: Power State #0 00:40:39.336 Power State #0: 00:40:39.336 Max Power: 0.00 W 00:40:39.336 Non-Operational State: Operational 00:40:39.336 Entry Latency: Not Reported 00:40:39.336 Exit Latency: Not Reported 00:40:39.336 Relative Read Throughput: 0 00:40:39.336 Relative Read Latency: 0 00:40:39.336 Relative Write Throughput: 0 00:40:39.336 Relative Write Latency: 0 00:40:39.336 Idle Power: Not Reported 00:40:39.336 Active Power: Not Reported 00:40:39.336 Non-Operational Permissive Mode: Not Supported 00:40:39.336 00:40:39.336 Health Information 00:40:39.336 ================== 00:40:39.336 Critical Warnings: 00:40:39.336 Available Spare Space: OK 00:40:39.336 Temperature: OK 00:40:39.336 Device Reliability: OK 00:40:39.336 Read Only: No 00:40:39.336 Volatile Memory Backup: OK 00:40:39.336 Current Temperature: 0 Kelvin (-273 Celsius) 00:40:39.336 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:40:39.336 Available Spare: 0% 00:40:39.336 Available Sp[2024-12-09 05:35:33.383456] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:40:39.336 [2024-12-09 05:35:33.383473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:40:39.336 [2024-12-09 05:35:33.383520] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:40:39.336 [2024-12-09 05:35:33.383537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:39.336 [2024-12-09 05:35:33.383548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:39.336 [2024-12-09 05:35:33.383558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:39.336 [2024-12-09 05:35:33.383567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:39.336 [2024-12-09 05:35:33.383974] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:40:39.336 [2024-12-09 05:35:33.383998] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:40:39.336 [2024-12-09 05:35:33.384969] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:40:39.336 [2024-12-09 05:35:33.385055] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:40:39.336 [2024-12-09 05:35:33.385069] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:40:39.336 [2024-12-09 05:35:33.385976] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:40:39.336 [2024-12-09 05:35:33.385998] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:40:39.336 [2024-12-09 05:35:33.386051] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:40:39.336 [2024-12-09 05:35:33.389283] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:40:39.336 are Threshold: 0% 00:40:39.336 Life Percentage Used: 0% 00:40:39.336 Data Units Read: 0 00:40:39.336 Data Units Written: 0 00:40:39.336 Host Read Commands: 0 00:40:39.336 Host Write Commands: 0 00:40:39.336 Controller Busy Time: 0 minutes 00:40:39.336 Power Cycles: 0 00:40:39.336 Power On Hours: 0 hours 00:40:39.336 Unsafe Shutdowns: 0 00:40:39.336 Unrecoverable Media Errors: 0 00:40:39.336 Lifetime Error Log Entries: 0 00:40:39.336 Warning Temperature Time: 0 minutes 00:40:39.336 Critical Temperature Time: 0 minutes 00:40:39.336 00:40:39.336 Number of Queues 00:40:39.336 ================ 00:40:39.336 Number of I/O Submission Queues: 127 00:40:39.336 Number of I/O Completion Queues: 127 00:40:39.336 00:40:39.336 Active Namespaces 00:40:39.336 ================= 00:40:39.336 Namespace ID:1 00:40:39.336 Error Recovery Timeout: Unlimited 00:40:39.336 Command Set Identifier: NVM (00h) 00:40:39.336 Deallocate: Supported 00:40:39.336 Deallocated/Unwritten Error: Not Supported 00:40:39.336 Deallocated Read Value: Unknown 00:40:39.336 Deallocate in Write Zeroes: Not Supported 00:40:39.336 Deallocated Guard Field: 0xFFFF 00:40:39.336 Flush: Supported 00:40:39.336 Reservation: Supported 00:40:39.336 Namespace Sharing Capabilities: Multiple Controllers 00:40:39.336 Size (in LBAs): 131072 (0GiB) 00:40:39.336 Capacity (in LBAs): 131072 (0GiB) 00:40:39.336 Utilization (in LBAs): 131072 (0GiB) 00:40:39.336 NGUID: A753EF4CB1A846C39CC015D494AE3BF0 00:40:39.336 UUID: a753ef4c-b1a8-46c3-9cc0-15d494ae3bf0 00:40:39.336 Thin Provisioning: Not Supported 00:40:39.336 Per-NS Atomic Units: Yes 00:40:39.336 Atomic Boundary Size (Normal): 0 00:40:39.336 Atomic Boundary Size (PFail): 0 00:40:39.336 Atomic Boundary Offset: 0 00:40:39.336 Maximum Single Source Range Length: 65535 00:40:39.336 Maximum Copy Length: 65535 00:40:39.336 Maximum Source Range Count: 1 00:40:39.336 NGUID/EUI64 Never Reused: No 00:40:39.336 Namespace Write Protected: No 00:40:39.336 Number of LBA Formats: 1 00:40:39.336 Current LBA Format: LBA Format #00 00:40:39.336 LBA Format #00: Data Size: 512 Metadata Size: 0 00:40:39.336 00:40:39.336 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
00:40:39.594 [2024-12-09 05:35:33.718169] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:40:44.855 Initializing NVMe Controllers 00:40:44.856 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:40:44.856 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:40:44.856 Initialization complete. Launching workers. 00:40:44.856 ======================================================== 00:40:44.856 Latency(us) 00:40:44.856 Device Information : IOPS MiB/s Average min max 00:40:44.856 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 30896.80 120.69 4142.87 1213.13 8201.52 00:40:44.856 ======================================================== 00:40:44.856 Total : 30896.80 120.69 4142.87 1213.13 8201.52 00:40:44.856 00:40:44.856 [2024-12-09 05:35:38.741053] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:40:44.856 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:40:45.113 [2024-12-09 05:35:39.084505] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:40:50.372 Initializing NVMe Controllers 00:40:50.372 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:40:50.372 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:40:50.372 Initialization complete. Launching workers. 
00:40:50.372 ======================================================== 00:40:50.372 Latency(us) 00:40:50.372 Device Information : IOPS MiB/s Average min max 00:40:50.372 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16044.77 62.67 7987.64 6029.45 15860.11 00:40:50.372 ======================================================== 00:40:50.372 Total : 16044.77 62.67 7987.64 6029.45 15860.11 00:40:50.372 00:40:50.372 [2024-12-09 05:35:44.123514] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:40:50.372 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:40:50.372 [2024-12-09 05:35:44.438864] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:40:55.633 [2024-12-09 05:35:49.511710] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:40:55.633 Initializing NVMe Controllers 00:40:55.633 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:40:55.633 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:40:55.633 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:40:55.633 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:40:55.633 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:40:55.633 Initialization complete. Launching workers. 00:40:55.633 Starting thread on core 2 00:40:55.633 Starting thread on core 3 00:40:55.633 Starting thread on core 1 00:40:55.633 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:40:55.903 [2024-12-09 05:35:49.918736] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:40:59.179 [2024-12-09 05:35:52.979307] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:40:59.179 Initializing NVMe Controllers 00:40:59.179 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:40:59.179 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:40:59.179 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:40:59.179 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:40:59.179 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:40:59.179 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:40:59.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:40:59.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:40:59.179 Initialization complete. Launching workers. 
00:40:59.179 Starting thread on core 1 with urgent priority queue 00:40:59.179 Starting thread on core 2 with urgent priority queue 00:40:59.179 Starting thread on core 3 with urgent priority queue 00:40:59.179 Starting thread on core 0 with urgent priority queue 00:40:59.179 SPDK bdev Controller (SPDK1 ) core 0: 4995.00 IO/s 20.02 secs/100000 ios 00:40:59.179 SPDK bdev Controller (SPDK1 ) core 1: 5051.33 IO/s 19.80 secs/100000 ios 00:40:59.179 SPDK bdev Controller (SPDK1 ) core 2: 4685.33 IO/s 21.34 secs/100000 ios 00:40:59.179 SPDK bdev Controller (SPDK1 ) core 3: 4926.67 IO/s 20.30 secs/100000 ios 00:40:59.179 ======================================================== 00:40:59.179 00:40:59.179 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:40:59.179 [2024-12-09 05:35:53.360871] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:40:59.179 Initializing NVMe Controllers 00:40:59.179 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:40:59.179 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:40:59.179 Namespace ID: 1 size: 0GB 00:40:59.179 Initialization complete. 00:40:59.179 INFO: using host memory buffer for IO 00:40:59.179 Hello world! 00:40:59.179 [2024-12-09 05:35:53.395430] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:40:59.436 05:35:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:40:59.694 [2024-12-09 05:35:53.781740] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:41:00.627 Initializing NVMe Controllers 00:41:00.627 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:41:00.627 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:41:00.627 Initialization complete. Launching workers. 
00:41:00.627 submit (in ns) avg, min, max = 6830.6, 3515.6, 4014684.4 00:41:00.627 complete (in ns) avg, min, max = 28910.1, 2067.8, 4020672.2 00:41:00.627 00:41:00.627 Submit histogram 00:41:00.627 ================ 00:41:00.627 Range in us Cumulative Count 00:41:00.627 3.508 - 3.532: 0.1333% ( 17) 00:41:00.627 3.532 - 3.556: 0.7682% ( 81) 00:41:00.627 3.556 - 3.579: 2.4849% ( 219) 00:41:00.627 3.579 - 3.603: 6.0751% ( 458) 00:41:00.627 3.603 - 3.627: 12.0875% ( 767) 00:41:00.627 3.627 - 3.650: 20.5064% ( 1074) 00:41:00.627 3.650 - 3.674: 28.4314% ( 1011) 00:41:00.627 3.674 - 3.698: 36.0430% ( 971) 00:41:00.627 3.698 - 3.721: 43.0822% ( 898) 00:41:00.627 3.721 - 3.745: 49.4317% ( 810) 00:41:00.627 3.745 - 3.769: 54.3858% ( 632) 00:41:00.627 3.769 - 3.793: 58.4777% ( 522) 00:41:00.627 3.793 - 3.816: 62.0757% ( 459) 00:41:00.627 3.816 - 3.840: 65.8305% ( 479) 00:41:00.627 3.840 - 3.864: 70.1654% ( 553) 00:41:00.627 3.864 - 3.887: 74.3043% ( 528) 00:41:00.627 3.887 - 3.911: 78.4119% ( 524) 00:41:00.627 3.911 - 3.935: 82.1667% ( 479) 00:41:00.627 3.935 - 3.959: 84.9494% ( 355) 00:41:00.627 3.959 - 3.982: 86.9013% ( 249) 00:41:00.627 3.982 - 4.006: 88.6494% ( 223) 00:41:00.627 4.006 - 4.030: 90.1231% ( 188) 00:41:00.627 4.030 - 4.053: 91.3930% ( 162) 00:41:00.627 4.053 - 4.077: 92.4277% ( 132) 00:41:00.627 4.077 - 4.101: 93.2821% ( 109) 00:41:00.627 4.101 - 4.124: 94.0346% ( 96) 00:41:00.627 4.124 - 4.148: 94.7009% ( 85) 00:41:00.627 4.148 - 4.172: 95.2026% ( 64) 00:41:00.627 4.172 - 4.196: 95.5475% ( 44) 00:41:00.627 4.196 - 4.219: 95.7749% ( 29) 00:41:00.627 4.219 - 4.243: 95.9708% ( 25) 00:41:00.627 4.243 - 4.267: 96.0963% ( 16) 00:41:00.627 4.267 - 4.290: 96.2295% ( 17) 00:41:00.627 4.290 - 4.314: 96.3393% ( 14) 00:41:00.627 4.314 - 4.338: 96.4804% ( 18) 00:41:00.627 4.338 - 4.361: 96.5979% ( 15) 00:41:00.627 4.361 - 4.385: 96.6842% ( 11) 00:41:00.627 4.385 - 4.409: 96.7312% ( 6) 00:41:00.627 4.409 - 4.433: 96.7782% ( 6) 00:41:00.627 4.433 - 4.456: 96.8410% ( 8) 00:41:00.627 4.456 - 4.480: 96.8645% ( 3) 00:41:00.627 4.504 - 4.527: 96.8723% ( 1) 00:41:00.627 4.527 - 4.551: 96.8958% ( 3) 00:41:00.627 4.551 - 4.575: 96.9193% ( 3) 00:41:00.627 4.575 - 4.599: 96.9507% ( 4) 00:41:00.627 4.599 - 4.622: 96.9664% ( 2) 00:41:00.627 4.622 - 4.646: 96.9820% ( 2) 00:41:00.627 4.670 - 4.693: 97.0056% ( 3) 00:41:00.627 4.693 - 4.717: 97.0604% ( 7) 00:41:00.627 4.717 - 4.741: 97.0918% ( 4) 00:41:00.627 4.741 - 4.764: 97.1388% ( 6) 00:41:00.627 4.764 - 4.788: 97.1467% ( 1) 00:41:00.627 4.788 - 4.812: 97.1702% ( 3) 00:41:00.627 4.812 - 4.836: 97.2172% ( 6) 00:41:00.627 4.836 - 4.859: 97.2799% ( 8) 00:41:00.627 4.859 - 4.883: 97.3583% ( 10) 00:41:00.627 4.883 - 4.907: 97.3897% ( 4) 00:41:00.627 4.907 - 4.930: 97.4289% ( 5) 00:41:00.627 4.930 - 4.954: 97.4681% ( 5) 00:41:00.627 4.954 - 4.978: 97.5229% ( 7) 00:41:00.627 4.978 - 5.001: 97.5621% ( 5) 00:41:00.627 5.001 - 5.025: 97.6405% ( 10) 00:41:00.627 5.025 - 5.049: 97.6797% ( 5) 00:41:00.627 5.049 - 5.073: 97.7189% ( 5) 00:41:00.627 5.073 - 5.096: 97.7503% ( 4) 00:41:00.627 5.096 - 5.120: 97.7894% ( 5) 00:41:00.627 5.120 - 5.144: 97.8130% ( 3) 00:41:00.627 5.144 - 5.167: 97.8365% ( 3) 00:41:00.627 5.167 - 5.191: 97.8600% ( 3) 00:41:00.627 5.191 - 5.215: 97.8914% ( 4) 00:41:00.627 5.215 - 5.239: 97.8992% ( 1) 00:41:00.627 5.239 - 5.262: 97.9227% ( 3) 00:41:00.627 5.262 - 5.286: 97.9384% ( 2) 00:41:00.627 5.310 - 5.333: 97.9697% ( 4) 00:41:00.627 5.333 - 5.357: 97.9933% ( 3) 00:41:00.627 5.357 - 5.381: 98.0011% ( 1) 00:41:00.627 5.381 - 5.404: 98.0168% ( 2) 
00:41:00.627 5.404 - 5.428: 98.0246% ( 1) 00:41:00.627 5.499 - 5.523: 98.0325% ( 1) 00:41:00.627 5.523 - 5.547: 98.0403% ( 1) 00:41:00.627 5.547 - 5.570: 98.0481% ( 1) 00:41:00.627 5.594 - 5.618: 98.0560% ( 1) 00:41:00.627 5.665 - 5.689: 98.0638% ( 1) 00:41:00.627 5.689 - 5.713: 98.0716% ( 1) 00:41:00.627 5.807 - 5.831: 98.0795% ( 1) 00:41:00.627 5.879 - 5.902: 98.0873% ( 1) 00:41:00.627 6.021 - 6.044: 98.0952% ( 1) 00:41:00.628 6.044 - 6.068: 98.1030% ( 1) 00:41:00.628 6.116 - 6.163: 98.1108% ( 1) 00:41:00.628 6.163 - 6.210: 98.1265% ( 2) 00:41:00.628 6.305 - 6.353: 98.1344% ( 1) 00:41:00.628 6.779 - 6.827: 98.1422% ( 1) 00:41:00.628 6.969 - 7.016: 98.1500% ( 1) 00:41:00.628 7.064 - 7.111: 98.1579% ( 1) 00:41:00.628 7.159 - 7.206: 98.1657% ( 1) 00:41:00.628 7.301 - 7.348: 98.1814% ( 2) 00:41:00.628 7.348 - 7.396: 98.1892% ( 1) 00:41:00.628 7.443 - 7.490: 98.2049% ( 2) 00:41:00.628 7.490 - 7.538: 98.2206% ( 2) 00:41:00.628 7.538 - 7.585: 98.2284% ( 1) 00:41:00.628 7.633 - 7.680: 98.2363% ( 1) 00:41:00.628 7.775 - 7.822: 98.2441% ( 1) 00:41:00.628 7.822 - 7.870: 98.2519% ( 1) 00:41:00.628 7.917 - 7.964: 98.2598% ( 1) 00:41:00.628 8.059 - 8.107: 98.2676% ( 1) 00:41:00.628 8.107 - 8.154: 98.2755% ( 1) 00:41:00.628 8.249 - 8.296: 98.2911% ( 2) 00:41:00.628 8.296 - 8.344: 98.2990% ( 1) 00:41:00.628 8.391 - 8.439: 98.3068% ( 1) 00:41:00.628 8.439 - 8.486: 98.3147% ( 1) 00:41:00.628 8.581 - 8.628: 98.3225% ( 1) 00:41:00.628 8.628 - 8.676: 98.3460% ( 3) 00:41:00.628 8.676 - 8.723: 98.3538% ( 1) 00:41:00.628 8.723 - 8.770: 98.3617% ( 1) 00:41:00.628 9.292 - 9.339: 98.3774% ( 2) 00:41:00.628 9.387 - 9.434: 98.3852% ( 1) 00:41:00.628 9.481 - 9.529: 98.3930% ( 1) 00:41:00.628 9.576 - 9.624: 98.4009% ( 1) 00:41:00.628 9.624 - 9.671: 98.4087% ( 1) 00:41:00.628 9.671 - 9.719: 98.4166% ( 1) 00:41:00.628 9.719 - 9.766: 98.4244% ( 1) 00:41:00.628 9.813 - 9.861: 98.4322% ( 1) 00:41:00.628 9.861 - 9.908: 98.4479% ( 2) 00:41:00.628 10.098 - 10.145: 98.4557% ( 1) 00:41:00.628 10.382 - 10.430: 98.4636% ( 1) 00:41:00.628 10.572 - 10.619: 98.4714% ( 1) 00:41:00.628 10.667 - 10.714: 98.4793% ( 1) 00:41:00.628 10.809 - 10.856: 98.5028% ( 3) 00:41:00.628 10.856 - 10.904: 98.5106% ( 1) 00:41:00.628 10.904 - 10.951: 98.5185% ( 1) 00:41:00.628 10.951 - 10.999: 98.5263% ( 1) 00:41:00.628 11.141 - 11.188: 98.5341% ( 1) 00:41:00.628 11.283 - 11.330: 98.5498% ( 2) 00:41:00.628 11.330 - 11.378: 98.5577% ( 1) 00:41:00.628 11.473 - 11.520: 98.5733% ( 2) 00:41:00.628 11.520 - 11.567: 98.5812% ( 1) 00:41:00.628 11.567 - 11.615: 98.5968% ( 2) 00:41:00.628 11.662 - 11.710: 98.6047% ( 1) 00:41:00.628 11.710 - 11.757: 98.6125% ( 1) 00:41:00.628 11.852 - 11.899: 98.6204% ( 1) 00:41:00.628 11.899 - 11.947: 98.6282% ( 1) 00:41:00.628 11.994 - 12.041: 98.6360% ( 1) 00:41:00.628 12.326 - 12.421: 98.6439% ( 1) 00:41:00.628 12.516 - 12.610: 98.6517% ( 1) 00:41:00.628 12.705 - 12.800: 98.6752% ( 3) 00:41:00.628 12.800 - 12.895: 98.6831% ( 1) 00:41:00.628 13.179 - 13.274: 98.6909% ( 1) 00:41:00.628 13.274 - 13.369: 98.6988% ( 1) 00:41:00.628 13.653 - 13.748: 98.7066% ( 1) 00:41:00.628 13.748 - 13.843: 98.7144% ( 1) 00:41:00.628 13.938 - 14.033: 98.7223% ( 1) 00:41:00.628 14.127 - 14.222: 98.7301% ( 1) 00:41:00.628 14.222 - 14.317: 98.7379% ( 1) 00:41:00.628 14.412 - 14.507: 98.7458% ( 1) 00:41:00.628 15.076 - 15.170: 98.7615% ( 2) 00:41:00.628 15.360 - 15.455: 98.7693% ( 1) 00:41:00.628 15.455 - 15.550: 98.7771% ( 1) 00:41:00.628 15.929 - 16.024: 98.7850% ( 1) 00:41:00.628 16.972 - 17.067: 98.8085% ( 3) 00:41:00.628 17.067 - 17.161: 98.8163% 
( 1) 00:41:00.628 17.256 - 17.351: 98.8399% ( 3) 00:41:00.628 17.351 - 17.446: 98.8634% ( 3) 00:41:00.628 17.446 - 17.541: 98.9104% ( 6) 00:41:00.628 17.541 - 17.636: 98.9574% ( 6) 00:41:00.628 17.636 - 17.730: 98.9810% ( 3) 00:41:00.628 17.730 - 17.825: 99.0515% ( 9) 00:41:00.628 17.825 - 17.920: 99.1142% ( 8) 00:41:00.628 17.920 - 18.015: 99.1534% ( 5) 00:41:00.628 18.015 - 18.110: 99.2161% ( 8) 00:41:00.628 18.110 - 18.204: 99.3023% ( 11) 00:41:00.628 18.204 - 18.299: 99.3886% ( 11) 00:41:00.628 18.299 - 18.394: 99.4905% ( 13) 00:41:00.628 18.394 - 18.489: 99.5532% ( 8) 00:41:00.628 18.489 - 18.584: 99.6002% ( 6) 00:41:00.628 18.584 - 18.679: 99.6316% ( 4) 00:41:00.628 18.679 - 18.773: 99.6864% ( 7) 00:41:00.628 18.773 - 18.868: 99.7570% ( 9) 00:41:00.628 18.868 - 18.963: 99.7884% ( 4) 00:41:00.628 18.963 - 19.058: 99.8119% ( 3) 00:41:00.628 19.153 - 19.247: 99.8275% ( 2) 00:41:00.628 19.247 - 19.342: 99.8511% ( 3) 00:41:00.628 19.437 - 19.532: 99.8589% ( 1) 00:41:00.628 19.627 - 19.721: 99.8667% ( 1) 00:41:00.628 19.816 - 19.911: 99.8746% ( 1) 00:41:00.628 20.101 - 20.196: 99.8824% ( 1) 00:41:00.628 22.566 - 22.661: 99.8903% ( 1) 00:41:00.628 23.230 - 23.324: 99.8981% ( 1) 00:41:00.628 23.324 - 23.419: 99.9059% ( 1) 00:41:00.628 24.841 - 25.031: 99.9138% ( 1) 00:41:00.628 29.393 - 29.582: 99.9216% ( 1) 00:41:00.628 34.892 - 35.081: 99.9295% ( 1) 00:41:00.628 3980.705 - 4004.978: 99.9765% ( 6) 00:41:00.628 4004.978 - 4029.250: 100.0000% ( 3) 00:41:00.628 00:41:00.628 Complete histogram 00:41:00.628 ================== 00:41:00.628 Range in us Cumulative Count 00:41:00.628 2.062 - 2.074: 0.6036% ( 77) 00:41:00.628 2.074 - 2.086: 25.0059% ( 3113) 00:41:00.628 2.086 - 2.098: 40.1348% ( 1930) 00:41:00.628 2.098 - 2.110: 43.3331% ( 408) 00:41:00.628 2.110 - 2.121: 54.5426% ( 1430) 00:41:00.628 2.121 - 2.133: 57.7957% ( 415) 00:41:00.628 2.133 - 2.145: 61.2213% ( 437) 00:41:00.628 2.145 - 2.157: 71.8351% ( 1354) 00:41:00.628 2.157 - 2.169: 75.4017% ( 455) 00:41:00.628 2.169 - 2.181: 77.0479% ( 210) 00:41:00.628 2.181 - 2.193: 80.2226% ( 405) 00:41:00.628 2.193 - 2.204: 81.1633% ( 120) 00:41:00.628 2.204 - 2.216: 82.4489% ( 164) 00:41:00.628 2.216 - 2.228: 86.3526% ( 498) 00:41:00.628 2.228 - 2.240: 89.2373% ( 368) 00:41:00.628 2.240 - 2.252: 90.6483% ( 180) 00:41:00.628 2.252 - 2.264: 92.3101% ( 212) 00:41:00.628 2.264 - 2.276: 92.7569% ( 57) 00:41:00.628 2.276 - 2.287: 93.0548% ( 38) 00:41:00.628 2.287 - 2.299: 93.5094% ( 58) 00:41:00.628 2.299 - 2.311: 94.1836% ( 86) 00:41:00.628 2.311 - 2.323: 94.7637% ( 74) 00:41:00.628 2.323 - 2.335: 94.8812% ( 15) 00:41:00.628 2.335 - 2.347: 94.9126% ( 4) 00:41:00.628 2.347 - 2.359: 94.9518% ( 5) 00:41:00.628 2.359 - 2.370: 94.9831% ( 4) 00:41:00.628 2.370 - 2.382: 95.1164% ( 17) 00:41:00.628 2.382 - 2.394: 95.5005% ( 49) 00:41:00.628 2.394 - 2.406: 95.8141% ( 40) 00:41:00.628 2.406 - 2.418: 95.9865% ( 22) 00:41:00.628 2.418 - 2.430: 96.2530% ( 34) 00:41:00.628 2.430 - 2.441: 96.5744% ( 41) 00:41:00.628 2.441 - 2.453: 96.8331% ( 33) 00:41:00.628 2.453 - 2.465: 97.0448% ( 27) 00:41:00.628 2.465 - 2.477: 97.2015% ( 20) 00:41:00.628 2.477 - 2.489: 97.3191% ( 15) 00:41:00.628 2.489 - 2.501: 97.3897% ( 9) 00:41:00.628 2.501 - 2.513: 97.5151% ( 16) 00:41:00.628 2.513 - 2.524: 97.6013% ( 11) 00:41:00.628 2.524 - 2.536: 97.7032% ( 13) 00:41:00.628 2.536 - 2.548: 97.7816% ( 10) 00:41:00.628 2.548 - 2.560: 97.8522% ( 9) 00:41:00.628 2.560 - 2.572: 97.8835% ( 4) 00:41:00.628 2.572 - 2.584: 97.9227% ( 5) 00:41:00.628 2.596 - 2.607: 97.9384% ( 2) 00:41:00.628 2.607 - 
2.619: 97.9462% ( 1) 00:41:00.628 2.619 - 2.631: 97.9776% ( 4) 00:41:00.628 2.631 - 2.643: 97.9933% ( 2) 00:41:00.628 2.655 - 2.667: 98.0011% ( 1) 00:41:00.628 2.726 - 2.738: 98.0089% ( 1) 00:41:00.628 2.750 - 2.761: 98.0246% ( 2) 00:41:00.628 2.761 - 2.773: 98.0403% ( 2) 00:41:00.628 2.773 - 2.785: 98.0560% ( 2) 00:41:00.628 2.785 - 2.797: 98.0638% ( 1) 00:41:00.628 2.797 - 2.809: 98.0716% ( 1) 00:41:00.628 2.833 - 2.844: 98.0873% ( 2) 00:41:00.628 2.844 - 2.856: 98.1030% ( 2) 00:41:00.628 2.856 - 2.868: 98.1187% ( 2) 00:41:00.628 2.880 - 2.892: 9[2024-12-09 05:35:54.803761] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:41:00.887 8.1265% ( 1) 00:41:00.887 2.904 - 2.916: 98.1422% ( 2) 00:41:00.887 2.916 - 2.927: 98.1657% ( 3) 00:41:00.887 2.927 - 2.939: 98.1736% ( 1) 00:41:00.887 2.939 - 2.951: 98.1814% ( 1) 00:41:00.887 2.963 - 2.975: 98.1971% ( 2) 00:41:00.887 2.975 - 2.987: 98.2049% ( 1) 00:41:00.887 2.987 - 2.999: 98.2206% ( 2) 00:41:00.887 3.034 - 3.058: 98.2441% ( 3) 00:41:00.887 3.058 - 3.081: 98.2519% ( 1) 00:41:00.887 3.129 - 3.153: 98.2598% ( 1) 00:41:00.887 3.153 - 3.176: 98.2911% ( 4) 00:41:00.887 3.176 - 3.200: 98.2990% ( 1) 00:41:00.887 3.247 - 3.271: 98.3068% ( 1) 00:41:00.887 3.295 - 3.319: 98.3147% ( 1) 00:41:00.887 3.319 - 3.342: 98.3382% ( 3) 00:41:00.887 3.390 - 3.413: 98.3617% ( 3) 00:41:00.887 3.413 - 3.437: 98.3695% ( 1) 00:41:00.887 3.437 - 3.461: 98.3774% ( 1) 00:41:00.887 3.508 - 3.532: 98.4009% ( 3) 00:41:00.887 3.532 - 3.556: 98.4087% ( 1) 00:41:00.887 3.556 - 3.579: 98.4166% ( 1) 00:41:00.887 3.579 - 3.603: 98.4244% ( 1) 00:41:00.887 3.603 - 3.627: 98.4322% ( 1) 00:41:00.887 3.627 - 3.650: 98.4401% ( 1) 00:41:00.887 3.674 - 3.698: 98.4479% ( 1) 00:41:00.887 3.698 - 3.721: 98.4557% ( 1) 00:41:00.887 3.721 - 3.745: 98.4636% ( 1) 00:41:00.887 3.745 - 3.769: 98.4714% ( 1) 00:41:00.887 3.769 - 3.793: 98.4949% ( 3) 00:41:00.887 3.840 - 3.864: 98.5028% ( 1) 00:41:00.887 3.887 - 3.911: 98.5106% ( 1) 00:41:00.887 3.911 - 3.935: 98.5185% ( 1) 00:41:00.887 3.959 - 3.982: 98.5341% ( 2) 00:41:00.887 4.361 - 4.385: 98.5420% ( 1) 00:41:00.887 5.594 - 5.618: 98.5498% ( 1) 00:41:00.887 5.641 - 5.665: 98.5577% ( 1) 00:41:00.887 5.760 - 5.784: 98.5655% ( 1) 00:41:00.887 6.068 - 6.116: 98.5812% ( 2) 00:41:00.887 6.447 - 6.495: 98.5890% ( 1) 00:41:00.887 6.495 - 6.542: 98.5968% ( 1) 00:41:00.887 6.542 - 6.590: 98.6047% ( 1) 00:41:00.887 6.779 - 6.827: 98.6125% ( 1) 00:41:00.887 6.827 - 6.874: 98.6204% ( 1) 00:41:00.887 6.874 - 6.921: 98.6282% ( 1) 00:41:00.887 7.206 - 7.253: 98.6439% ( 2) 00:41:00.887 7.538 - 7.585: 98.6517% ( 1) 00:41:00.887 7.775 - 7.822: 98.6596% ( 1) 00:41:00.887 7.822 - 7.870: 98.6674% ( 1) 00:41:00.887 7.964 - 8.012: 98.6752% ( 1) 00:41:00.887 8.296 - 8.344: 98.6909% ( 2) 00:41:00.887 15.265 - 15.360: 98.6988% ( 1) 00:41:00.887 15.360 - 15.455: 98.7066% ( 1) 00:41:00.887 15.550 - 15.644: 98.7144% ( 1) 00:41:00.887 15.644 - 15.739: 98.7223% ( 1) 00:41:00.887 15.739 - 15.834: 98.7536% ( 4) 00:41:00.887 15.834 - 15.929: 98.7928% ( 5) 00:41:00.887 15.929 - 16.024: 98.8085% ( 2) 00:41:00.887 16.024 - 16.119: 98.8242% ( 2) 00:41:00.887 16.119 - 16.213: 98.8555% ( 4) 00:41:00.887 16.213 - 16.308: 98.8712% ( 2) 00:41:00.887 16.308 - 16.403: 98.9026% ( 4) 00:41:00.887 16.403 - 16.498: 98.9182% ( 2) 00:41:00.887 16.498 - 16.593: 98.9731% ( 7) 00:41:00.887 16.593 - 16.687: 99.0201% ( 6) 00:41:00.887 16.687 - 16.782: 99.0672% ( 6) 00:41:00.887 16.782 - 16.877: 99.1064% ( 5) 00:41:00.887 16.877 - 16.972: 99.1299% 
( 3) 00:41:00.887 16.972 - 17.067: 99.1534% ( 3) 00:41:00.887 17.067 - 17.161: 99.1848% ( 4) 00:41:00.887 17.161 - 17.256: 99.2083% ( 3) 00:41:00.887 17.256 - 17.351: 99.2240% ( 2) 00:41:00.887 17.446 - 17.541: 99.2396% ( 2) 00:41:00.887 17.636 - 17.730: 99.2553% ( 2) 00:41:00.887 17.730 - 17.825: 99.2631% ( 1) 00:41:00.887 17.825 - 17.920: 99.2710% ( 1) 00:41:00.887 17.920 - 18.015: 99.2788% ( 1) 00:41:00.887 18.204 - 18.299: 99.2945% ( 2) 00:41:00.887 18.773 - 18.868: 99.3023% ( 1) 00:41:00.887 20.006 - 20.101: 99.3102% ( 1) 00:41:00.887 24.652 - 24.841: 99.3180% ( 1) 00:41:00.887 27.117 - 27.307: 99.3259% ( 1) 00:41:00.887 29.013 - 29.203: 99.3337% ( 1) 00:41:00.887 3907.887 - 3932.160: 99.3415% ( 1) 00:41:00.887 3980.705 - 4004.978: 99.8119% ( 60) 00:41:00.887 4004.978 - 4029.250: 100.0000% ( 24) 00:41:00.887 00:41:00.887 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:41:00.887 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:41:00.887 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:41:00.887 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:41:00.887 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:41:01.155 [ 00:41:01.155 { 00:41:01.155 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:41:01.155 "subtype": "Discovery", 00:41:01.155 "listen_addresses": [], 00:41:01.155 "allow_any_host": true, 00:41:01.156 "hosts": [] 00:41:01.156 }, 00:41:01.156 { 00:41:01.156 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:41:01.156 "subtype": "NVMe", 00:41:01.156 "listen_addresses": [ 00:41:01.156 { 00:41:01.156 "trtype": "VFIOUSER", 00:41:01.156 "adrfam": "IPv4", 00:41:01.156 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:41:01.156 "trsvcid": "0" 00:41:01.156 } 00:41:01.156 ], 00:41:01.156 "allow_any_host": true, 00:41:01.156 "hosts": [], 00:41:01.156 "serial_number": "SPDK1", 00:41:01.156 "model_number": "SPDK bdev Controller", 00:41:01.156 "max_namespaces": 32, 00:41:01.156 "min_cntlid": 1, 00:41:01.156 "max_cntlid": 65519, 00:41:01.156 "namespaces": [ 00:41:01.156 { 00:41:01.156 "nsid": 1, 00:41:01.156 "bdev_name": "Malloc1", 00:41:01.156 "name": "Malloc1", 00:41:01.156 "nguid": "A753EF4CB1A846C39CC015D494AE3BF0", 00:41:01.156 "uuid": "a753ef4c-b1a8-46c3-9cc0-15d494ae3bf0" 00:41:01.156 } 00:41:01.156 ] 00:41:01.156 }, 00:41:01.156 { 00:41:01.156 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:41:01.156 "subtype": "NVMe", 00:41:01.156 "listen_addresses": [ 00:41:01.156 { 00:41:01.156 "trtype": "VFIOUSER", 00:41:01.156 "adrfam": "IPv4", 00:41:01.156 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:41:01.156 "trsvcid": "0" 00:41:01.156 } 00:41:01.156 ], 00:41:01.156 "allow_any_host": true, 00:41:01.156 "hosts": [], 00:41:01.156 "serial_number": "SPDK2", 00:41:01.156 "model_number": "SPDK bdev Controller", 00:41:01.156 "max_namespaces": 32, 00:41:01.156 "min_cntlid": 1, 00:41:01.156 "max_cntlid": 65519, 00:41:01.156 "namespaces": [ 00:41:01.156 { 00:41:01.156 "nsid": 1, 00:41:01.156 "bdev_name": "Malloc2", 00:41:01.156 "name": "Malloc2", 00:41:01.156 "nguid": "94A3144672C44ADABB041D31DCBA901F", 00:41:01.156 "uuid": 
"94a31446-72c4-4ada-bb04-1d31dcba901f" 00:41:01.156 } 00:41:01.156 ] 00:41:01.156 } 00:41:01.156 ] 00:41:01.156 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:41:01.156 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=625904 00:41:01.156 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:41:01.156 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:41:01.156 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:41:01.156 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:41:01.156 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:41:01.156 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:41:01.156 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:41:01.156 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:41:01.156 [2024-12-09 05:35:55.356015] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:41:01.416 Malloc3 00:41:01.416 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:41:01.674 [2024-12-09 05:35:55.789260] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:41:01.674 05:35:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:41:01.674 Asynchronous Event Request test 00:41:01.674 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:41:01.674 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:41:01.674 Registering asynchronous event callbacks... 00:41:01.674 Starting namespace attribute notice tests for all controllers... 00:41:01.674 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:41:01.674 aer_cb - Changed Namespace 00:41:01.674 Cleaning up... 
00:41:01.932 [ 00:41:01.932 { 00:41:01.932 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:41:01.932 "subtype": "Discovery", 00:41:01.932 "listen_addresses": [], 00:41:01.932 "allow_any_host": true, 00:41:01.932 "hosts": [] 00:41:01.932 }, 00:41:01.932 { 00:41:01.932 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:41:01.932 "subtype": "NVMe", 00:41:01.932 "listen_addresses": [ 00:41:01.932 { 00:41:01.932 "trtype": "VFIOUSER", 00:41:01.932 "adrfam": "IPv4", 00:41:01.932 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:41:01.932 "trsvcid": "0" 00:41:01.932 } 00:41:01.932 ], 00:41:01.932 "allow_any_host": true, 00:41:01.932 "hosts": [], 00:41:01.932 "serial_number": "SPDK1", 00:41:01.932 "model_number": "SPDK bdev Controller", 00:41:01.932 "max_namespaces": 32, 00:41:01.932 "min_cntlid": 1, 00:41:01.932 "max_cntlid": 65519, 00:41:01.932 "namespaces": [ 00:41:01.932 { 00:41:01.932 "nsid": 1, 00:41:01.932 "bdev_name": "Malloc1", 00:41:01.932 "name": "Malloc1", 00:41:01.932 "nguid": "A753EF4CB1A846C39CC015D494AE3BF0", 00:41:01.932 "uuid": "a753ef4c-b1a8-46c3-9cc0-15d494ae3bf0" 00:41:01.932 }, 00:41:01.932 { 00:41:01.932 "nsid": 2, 00:41:01.932 "bdev_name": "Malloc3", 00:41:01.932 "name": "Malloc3", 00:41:01.932 "nguid": "64A1CEA959AE46E481B4F87299B97509", 00:41:01.932 "uuid": "64a1cea9-59ae-46e4-81b4-f87299b97509" 00:41:01.932 } 00:41:01.932 ] 00:41:01.932 }, 00:41:01.932 { 00:41:01.932 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:41:01.932 "subtype": "NVMe", 00:41:01.932 "listen_addresses": [ 00:41:01.932 { 00:41:01.932 "trtype": "VFIOUSER", 00:41:01.932 "adrfam": "IPv4", 00:41:01.932 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:41:01.932 "trsvcid": "0" 00:41:01.932 } 00:41:01.932 ], 00:41:01.932 "allow_any_host": true, 00:41:01.932 "hosts": [], 00:41:01.932 "serial_number": "SPDK2", 00:41:01.932 "model_number": "SPDK bdev Controller", 00:41:01.932 "max_namespaces": 32, 00:41:01.932 "min_cntlid": 1, 00:41:01.932 "max_cntlid": 65519, 00:41:01.932 "namespaces": [ 00:41:01.932 { 00:41:01.932 "nsid": 1, 00:41:01.932 "bdev_name": "Malloc2", 00:41:01.932 "name": "Malloc2", 00:41:01.932 "nguid": "94A3144672C44ADABB041D31DCBA901F", 00:41:01.932 "uuid": "94a31446-72c4-4ada-bb04-1d31dcba901f" 00:41:01.932 } 00:41:01.932 ] 00:41:01.932 } 00:41:01.932 ] 00:41:01.932 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 625904 00:41:01.932 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:41:01.932 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:41:01.932 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:41:01.932 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:41:01.932 [2024-12-09 05:35:56.091902] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:41:01.932 [2024-12-09 05:35:56.091938] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid625925 ] 00:41:01.932 [2024-12-09 05:35:56.141378] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:41:01.932 [2024-12-09 05:35:56.146717] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:41:01.932 [2024-12-09 05:35:56.146751] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fdee9d71000 00:41:01.932 [2024-12-09 05:35:56.147715] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:41:01.932 [2024-12-09 05:35:56.148731] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:41:01.932 [2024-12-09 05:35:56.149731] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:41:01.932 [2024-12-09 05:35:56.150744] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:41:01.932 [2024-12-09 05:35:56.151739] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:41:01.932 [2024-12-09 05:35:56.152780] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:41:01.932 [2024-12-09 05:35:56.153763] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:41:01.932 [2024-12-09 05:35:56.154769] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:41:01.932 [2024-12-09 05:35:56.155771] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:41:01.932 [2024-12-09 05:35:56.155794] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fdee9d66000 00:41:02.192 [2024-12-09 05:35:56.157041] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:41:02.192 [2024-12-09 05:35:56.172011] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:41:02.192 [2024-12-09 05:35:56.172053] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:41:02.192 [2024-12-09 05:35:56.174146] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:41:02.192 [2024-12-09 05:35:56.174204] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:41:02.192 [2024-12-09 05:35:56.174320] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:41:02.192 
[2024-12-09 05:35:56.174356] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:41:02.192 [2024-12-09 05:35:56.174367] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:41:02.192 [2024-12-09 05:35:56.175156] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:41:02.192 [2024-12-09 05:35:56.175181] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:41:02.192 [2024-12-09 05:35:56.175195] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:41:02.192 [2024-12-09 05:35:56.176163] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:41:02.192 [2024-12-09 05:35:56.176184] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:41:02.192 [2024-12-09 05:35:56.176198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:41:02.192 [2024-12-09 05:35:56.177167] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:41:02.192 [2024-12-09 05:35:56.177188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:41:02.192 [2024-12-09 05:35:56.179285] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:41:02.192 [2024-12-09 05:35:56.179306] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:41:02.192 [2024-12-09 05:35:56.179316] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:41:02.192 [2024-12-09 05:35:56.179328] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:41:02.192 [2024-12-09 05:35:56.179438] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:41:02.192 [2024-12-09 05:35:56.179447] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:41:02.192 [2024-12-09 05:35:56.179455] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:41:02.192 [2024-12-09 05:35:56.180181] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:41:02.192 [2024-12-09 05:35:56.181186] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:41:02.192 [2024-12-09 05:35:56.182192] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:41:02.192 [2024-12-09 05:35:56.183186] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:41:02.192 [2024-12-09 05:35:56.183279] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:41:02.192 [2024-12-09 05:35:56.184202] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:41:02.192 [2024-12-09 05:35:56.184222] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:41:02.192 [2024-12-09 05:35:56.184232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:41:02.192 [2024-12-09 05:35:56.184278] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:41:02.192 [2024-12-09 05:35:56.184296] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:41:02.192 [2024-12-09 05:35:56.184321] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:41:02.192 [2024-12-09 05:35:56.184332] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:41:02.192 [2024-12-09 05:35:56.184339] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:41:02.192 [2024-12-09 05:35:56.184357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:41:02.192 [2024-12-09 05:35:56.190285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:41:02.192 [2024-12-09 05:35:56.190309] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:41:02.192 [2024-12-09 05:35:56.190319] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:41:02.192 [2024-12-09 05:35:56.190327] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:41:02.192 [2024-12-09 05:35:56.190335] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:41:02.192 [2024-12-09 05:35:56.190343] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:41:02.192 [2024-12-09 05:35:56.190351] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:41:02.192 [2024-12-09 05:35:56.190359] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:41:02.192 [2024-12-09 05:35:56.190373] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:41:02.192 [2024-12-09 
05:35:56.190389] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:41:02.192 [2024-12-09 05:35:56.198288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:41:02.192 [2024-12-09 05:35:56.198312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:41:02.192 [2024-12-09 05:35:56.198327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:41:02.192 [2024-12-09 05:35:56.198340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:41:02.192 [2024-12-09 05:35:56.198353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:41:02.192 [2024-12-09 05:35:56.198366] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:41:02.192 [2024-12-09 05:35:56.198384] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:41:02.192 [2024-12-09 05:35:56.198400] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:41:02.192 [2024-12-09 05:35:56.206285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:41:02.192 [2024-12-09 05:35:56.206303] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:41:02.192 [2024-12-09 05:35:56.206312] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:41:02.192 [2024-12-09 05:35:56.206329] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:41:02.192 [2024-12-09 05:35:56.206340] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:41:02.193 [2024-12-09 05:35:56.206354] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:41:02.193 [2024-12-09 05:35:56.214284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:41:02.193 [2024-12-09 05:35:56.214363] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:41:02.193 [2024-12-09 05:35:56.214381] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:41:02.193 [2024-12-09 05:35:56.214394] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:41:02.193 [2024-12-09 05:35:56.214403] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:41:02.193 [2024-12-09 05:35:56.214410] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:41:02.193 [2024-12-09 05:35:56.214420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:41:02.193 [2024-12-09 05:35:56.222287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:41:02.193 [2024-12-09 05:35:56.222315] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:41:02.193 [2024-12-09 05:35:56.222336] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:41:02.193 [2024-12-09 05:35:56.222351] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:41:02.193 [2024-12-09 05:35:56.222364] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:41:02.193 [2024-12-09 05:35:56.222372] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:41:02.193 [2024-12-09 05:35:56.222378] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:41:02.193 [2024-12-09 05:35:56.222388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:41:02.193 [2024-12-09 05:35:56.230289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:41:02.193 [2024-12-09 05:35:56.230313] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:41:02.193 [2024-12-09 05:35:56.230333] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:41:02.193 [2024-12-09 05:35:56.230348] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:41:02.193 [2024-12-09 05:35:56.230357] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:41:02.193 [2024-12-09 05:35:56.230363] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:41:02.193 [2024-12-09 05:35:56.230372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:41:02.193 [2024-12-09 05:35:56.238287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:41:02.193 [2024-12-09 05:35:56.238315] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:41:02.193 [2024-12-09 05:35:56.238330] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:41:02.193 [2024-12-09 05:35:56.238343] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:41:02.193 [2024-12-09 05:35:56.238354] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:41:02.193 [2024-12-09 05:35:56.238363] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:41:02.193 [2024-12-09 05:35:56.238372] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:41:02.193 [2024-12-09 05:35:56.238380] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:41:02.193 [2024-12-09 05:35:56.238388] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:41:02.193 [2024-12-09 05:35:56.238397] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:41:02.193 [2024-12-09 05:35:56.238421] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:41:02.193 [2024-12-09 05:35:56.246282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:41:02.193 [2024-12-09 05:35:56.246310] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:41:02.193 [2024-12-09 05:35:56.254299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:41:02.193 [2024-12-09 05:35:56.254326] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:41:02.193 [2024-12-09 05:35:56.262287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:41:02.193 [2024-12-09 05:35:56.262312] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:41:02.193 [2024-12-09 05:35:56.270286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:41:02.193 [2024-12-09 05:35:56.270330] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:41:02.193 [2024-12-09 05:35:56.270342] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:41:02.193 [2024-12-09 05:35:56.270352] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:41:02.193 [2024-12-09 05:35:56.270359] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:41:02.193 [2024-12-09 05:35:56.270365] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:41:02.193 [2024-12-09 05:35:56.270375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:41:02.193 [2024-12-09 05:35:56.270389] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:41:02.193 
[2024-12-09 05:35:56.270398] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:41:02.193 [2024-12-09 05:35:56.270404] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:41:02.193 [2024-12-09 05:35:56.270413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:41:02.193 [2024-12-09 05:35:56.270425] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:41:02.193 [2024-12-09 05:35:56.270433] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:41:02.193 [2024-12-09 05:35:56.270440] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:41:02.193 [2024-12-09 05:35:56.270448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:41:02.193 [2024-12-09 05:35:56.270462] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:41:02.193 [2024-12-09 05:35:56.270471] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:41:02.193 [2024-12-09 05:35:56.270477] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:41:02.193 [2024-12-09 05:35:56.270486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:41:02.193 [2024-12-09 05:35:56.278285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:41:02.193 [2024-12-09 05:35:56.278314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:41:02.193 [2024-12-09 05:35:56.278337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:41:02.193 [2024-12-09 05:35:56.278350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:41:02.193 ===================================================== 00:41:02.193 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:41:02.193 ===================================================== 00:41:02.193 Controller Capabilities/Features 00:41:02.193 ================================ 00:41:02.193 Vendor ID: 4e58 00:41:02.193 Subsystem Vendor ID: 4e58 00:41:02.193 Serial Number: SPDK2 00:41:02.193 Model Number: SPDK bdev Controller 00:41:02.193 Firmware Version: 25.01 00:41:02.193 Recommended Arb Burst: 6 00:41:02.193 IEEE OUI Identifier: 8d 6b 50 00:41:02.193 Multi-path I/O 00:41:02.193 May have multiple subsystem ports: Yes 00:41:02.193 May have multiple controllers: Yes 00:41:02.193 Associated with SR-IOV VF: No 00:41:02.193 Max Data Transfer Size: 131072 00:41:02.193 Max Number of Namespaces: 32 00:41:02.193 Max Number of I/O Queues: 127 00:41:02.193 NVMe Specification Version (VS): 1.3 00:41:02.193 NVMe Specification Version (Identify): 1.3 00:41:02.193 Maximum Queue Entries: 256 00:41:02.193 Contiguous Queues Required: Yes 00:41:02.193 Arbitration Mechanisms Supported 00:41:02.193 Weighted Round Robin: Not Supported 00:41:02.193 Vendor Specific: Not 
Supported 00:41:02.193 Reset Timeout: 15000 ms 00:41:02.193 Doorbell Stride: 4 bytes 00:41:02.193 NVM Subsystem Reset: Not Supported 00:41:02.193 Command Sets Supported 00:41:02.193 NVM Command Set: Supported 00:41:02.193 Boot Partition: Not Supported 00:41:02.193 Memory Page Size Minimum: 4096 bytes 00:41:02.193 Memory Page Size Maximum: 4096 bytes 00:41:02.193 Persistent Memory Region: Not Supported 00:41:02.193 Optional Asynchronous Events Supported 00:41:02.193 Namespace Attribute Notices: Supported 00:41:02.193 Firmware Activation Notices: Not Supported 00:41:02.193 ANA Change Notices: Not Supported 00:41:02.193 PLE Aggregate Log Change Notices: Not Supported 00:41:02.193 LBA Status Info Alert Notices: Not Supported 00:41:02.193 EGE Aggregate Log Change Notices: Not Supported 00:41:02.193 Normal NVM Subsystem Shutdown event: Not Supported 00:41:02.194 Zone Descriptor Change Notices: Not Supported 00:41:02.194 Discovery Log Change Notices: Not Supported 00:41:02.194 Controller Attributes 00:41:02.194 128-bit Host Identifier: Supported 00:41:02.194 Non-Operational Permissive Mode: Not Supported 00:41:02.194 NVM Sets: Not Supported 00:41:02.194 Read Recovery Levels: Not Supported 00:41:02.194 Endurance Groups: Not Supported 00:41:02.194 Predictable Latency Mode: Not Supported 00:41:02.194 Traffic Based Keep ALive: Not Supported 00:41:02.194 Namespace Granularity: Not Supported 00:41:02.194 SQ Associations: Not Supported 00:41:02.194 UUID List: Not Supported 00:41:02.194 Multi-Domain Subsystem: Not Supported 00:41:02.194 Fixed Capacity Management: Not Supported 00:41:02.194 Variable Capacity Management: Not Supported 00:41:02.194 Delete Endurance Group: Not Supported 00:41:02.194 Delete NVM Set: Not Supported 00:41:02.194 Extended LBA Formats Supported: Not Supported 00:41:02.194 Flexible Data Placement Supported: Not Supported 00:41:02.194 00:41:02.194 Controller Memory Buffer Support 00:41:02.194 ================================ 00:41:02.194 Supported: No 00:41:02.194 00:41:02.194 Persistent Memory Region Support 00:41:02.194 ================================ 00:41:02.194 Supported: No 00:41:02.194 00:41:02.194 Admin Command Set Attributes 00:41:02.194 ============================ 00:41:02.194 Security Send/Receive: Not Supported 00:41:02.194 Format NVM: Not Supported 00:41:02.194 Firmware Activate/Download: Not Supported 00:41:02.194 Namespace Management: Not Supported 00:41:02.194 Device Self-Test: Not Supported 00:41:02.194 Directives: Not Supported 00:41:02.194 NVMe-MI: Not Supported 00:41:02.194 Virtualization Management: Not Supported 00:41:02.194 Doorbell Buffer Config: Not Supported 00:41:02.194 Get LBA Status Capability: Not Supported 00:41:02.194 Command & Feature Lockdown Capability: Not Supported 00:41:02.194 Abort Command Limit: 4 00:41:02.194 Async Event Request Limit: 4 00:41:02.194 Number of Firmware Slots: N/A 00:41:02.194 Firmware Slot 1 Read-Only: N/A 00:41:02.194 Firmware Activation Without Reset: N/A 00:41:02.194 Multiple Update Detection Support: N/A 00:41:02.194 Firmware Update Granularity: No Information Provided 00:41:02.194 Per-Namespace SMART Log: No 00:41:02.194 Asymmetric Namespace Access Log Page: Not Supported 00:41:02.194 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:41:02.194 Command Effects Log Page: Supported 00:41:02.194 Get Log Page Extended Data: Supported 00:41:02.194 Telemetry Log Pages: Not Supported 00:41:02.194 Persistent Event Log Pages: Not Supported 00:41:02.194 Supported Log Pages Log Page: May Support 00:41:02.194 Commands Supported & 
Effects Log Page: Not Supported 00:41:02.194 Feature Identifiers & Effects Log Page:May Support 00:41:02.194 NVMe-MI Commands & Effects Log Page: May Support 00:41:02.194 Data Area 4 for Telemetry Log: Not Supported 00:41:02.194 Error Log Page Entries Supported: 128 00:41:02.194 Keep Alive: Supported 00:41:02.194 Keep Alive Granularity: 10000 ms 00:41:02.194 00:41:02.194 NVM Command Set Attributes 00:41:02.194 ========================== 00:41:02.194 Submission Queue Entry Size 00:41:02.194 Max: 64 00:41:02.194 Min: 64 00:41:02.194 Completion Queue Entry Size 00:41:02.194 Max: 16 00:41:02.194 Min: 16 00:41:02.194 Number of Namespaces: 32 00:41:02.194 Compare Command: Supported 00:41:02.194 Write Uncorrectable Command: Not Supported 00:41:02.194 Dataset Management Command: Supported 00:41:02.194 Write Zeroes Command: Supported 00:41:02.194 Set Features Save Field: Not Supported 00:41:02.194 Reservations: Not Supported 00:41:02.194 Timestamp: Not Supported 00:41:02.194 Copy: Supported 00:41:02.194 Volatile Write Cache: Present 00:41:02.194 Atomic Write Unit (Normal): 1 00:41:02.194 Atomic Write Unit (PFail): 1 00:41:02.194 Atomic Compare & Write Unit: 1 00:41:02.194 Fused Compare & Write: Supported 00:41:02.194 Scatter-Gather List 00:41:02.194 SGL Command Set: Supported (Dword aligned) 00:41:02.194 SGL Keyed: Not Supported 00:41:02.194 SGL Bit Bucket Descriptor: Not Supported 00:41:02.194 SGL Metadata Pointer: Not Supported 00:41:02.194 Oversized SGL: Not Supported 00:41:02.194 SGL Metadata Address: Not Supported 00:41:02.194 SGL Offset: Not Supported 00:41:02.194 Transport SGL Data Block: Not Supported 00:41:02.194 Replay Protected Memory Block: Not Supported 00:41:02.194 00:41:02.194 Firmware Slot Information 00:41:02.194 ========================= 00:41:02.194 Active slot: 1 00:41:02.194 Slot 1 Firmware Revision: 25.01 00:41:02.194 00:41:02.194 00:41:02.194 Commands Supported and Effects 00:41:02.194 ============================== 00:41:02.194 Admin Commands 00:41:02.194 -------------- 00:41:02.194 Get Log Page (02h): Supported 00:41:02.194 Identify (06h): Supported 00:41:02.194 Abort (08h): Supported 00:41:02.194 Set Features (09h): Supported 00:41:02.194 Get Features (0Ah): Supported 00:41:02.194 Asynchronous Event Request (0Ch): Supported 00:41:02.194 Keep Alive (18h): Supported 00:41:02.194 I/O Commands 00:41:02.194 ------------ 00:41:02.194 Flush (00h): Supported LBA-Change 00:41:02.194 Write (01h): Supported LBA-Change 00:41:02.194 Read (02h): Supported 00:41:02.194 Compare (05h): Supported 00:41:02.194 Write Zeroes (08h): Supported LBA-Change 00:41:02.194 Dataset Management (09h): Supported LBA-Change 00:41:02.194 Copy (19h): Supported LBA-Change 00:41:02.194 00:41:02.194 Error Log 00:41:02.194 ========= 00:41:02.194 00:41:02.194 Arbitration 00:41:02.194 =========== 00:41:02.194 Arbitration Burst: 1 00:41:02.194 00:41:02.194 Power Management 00:41:02.194 ================ 00:41:02.194 Number of Power States: 1 00:41:02.194 Current Power State: Power State #0 00:41:02.194 Power State #0: 00:41:02.194 Max Power: 0.00 W 00:41:02.194 Non-Operational State: Operational 00:41:02.194 Entry Latency: Not Reported 00:41:02.194 Exit Latency: Not Reported 00:41:02.194 Relative Read Throughput: 0 00:41:02.194 Relative Read Latency: 0 00:41:02.194 Relative Write Throughput: 0 00:41:02.194 Relative Write Latency: 0 00:41:02.194 Idle Power: Not Reported 00:41:02.194 Active Power: Not Reported 00:41:02.194 Non-Operational Permissive Mode: Not Supported 00:41:02.194 00:41:02.194 Health Information 
00:41:02.194 ================== 00:41:02.194 Critical Warnings: 00:41:02.194 Available Spare Space: OK 00:41:02.194 Temperature: OK 00:41:02.194 Device Reliability: OK 00:41:02.194 Read Only: No 00:41:02.194 Volatile Memory Backup: OK 00:41:02.194 Current Temperature: 0 Kelvin (-273 Celsius) 00:41:02.194 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:41:02.194 Available Spare: 0% 00:41:02.194 Available Sp[2024-12-09 05:35:56.278479] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:41:02.194 [2024-12-09 05:35:56.283283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:41:02.194 [2024-12-09 05:35:56.283339] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:41:02.194 [2024-12-09 05:35:56.283357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:02.194 [2024-12-09 05:35:56.283368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:02.194 [2024-12-09 05:35:56.283378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:02.194 [2024-12-09 05:35:56.283387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:02.194 [2024-12-09 05:35:56.283469] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:41:02.194 [2024-12-09 05:35:56.283491] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:41:02.194 [2024-12-09 05:35:56.284473] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:41:02.194 [2024-12-09 05:35:56.284546] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:41:02.194 [2024-12-09 05:35:56.284562] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:41:02.194 [2024-12-09 05:35:56.285485] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:41:02.194 [2024-12-09 05:35:56.285509] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:41:02.194 [2024-12-09 05:35:56.285576] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:41:02.194 [2024-12-09 05:35:56.286784] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:41:02.194 are Threshold: 0% 00:41:02.194 Life Percentage Used: 0% 00:41:02.194 Data Units Read: 0 00:41:02.194 Data Units Written: 0 00:41:02.194 Host Read Commands: 0 00:41:02.195 Host Write Commands: 0 00:41:02.195 Controller Busy Time: 0 minutes 00:41:02.195 Power Cycles: 0 00:41:02.195 Power On Hours: 0 hours 00:41:02.195 Unsafe Shutdowns: 0 00:41:02.195 Unrecoverable Media Errors: 0 00:41:02.195 Lifetime Error Log Entries: 0 00:41:02.195 Warning Temperature 
Time: 0 minutes 00:41:02.195 Critical Temperature Time: 0 minutes 00:41:02.195 00:41:02.195 Number of Queues 00:41:02.195 ================ 00:41:02.195 Number of I/O Submission Queues: 127 00:41:02.195 Number of I/O Completion Queues: 127 00:41:02.195 00:41:02.195 Active Namespaces 00:41:02.195 ================= 00:41:02.195 Namespace ID:1 00:41:02.195 Error Recovery Timeout: Unlimited 00:41:02.195 Command Set Identifier: NVM (00h) 00:41:02.195 Deallocate: Supported 00:41:02.195 Deallocated/Unwritten Error: Not Supported 00:41:02.195 Deallocated Read Value: Unknown 00:41:02.195 Deallocate in Write Zeroes: Not Supported 00:41:02.195 Deallocated Guard Field: 0xFFFF 00:41:02.195 Flush: Supported 00:41:02.195 Reservation: Supported 00:41:02.195 Namespace Sharing Capabilities: Multiple Controllers 00:41:02.195 Size (in LBAs): 131072 (0GiB) 00:41:02.195 Capacity (in LBAs): 131072 (0GiB) 00:41:02.195 Utilization (in LBAs): 131072 (0GiB) 00:41:02.195 NGUID: 94A3144672C44ADABB041D31DCBA901F 00:41:02.195 UUID: 94a31446-72c4-4ada-bb04-1d31dcba901f 00:41:02.195 Thin Provisioning: Not Supported 00:41:02.195 Per-NS Atomic Units: Yes 00:41:02.195 Atomic Boundary Size (Normal): 0 00:41:02.195 Atomic Boundary Size (PFail): 0 00:41:02.195 Atomic Boundary Offset: 0 00:41:02.195 Maximum Single Source Range Length: 65535 00:41:02.195 Maximum Copy Length: 65535 00:41:02.195 Maximum Source Range Count: 1 00:41:02.195 NGUID/EUI64 Never Reused: No 00:41:02.195 Namespace Write Protected: No 00:41:02.195 Number of LBA Formats: 1 00:41:02.195 Current LBA Format: LBA Format #00 00:41:02.195 LBA Format #00: Data Size: 512 Metadata Size: 0 00:41:02.195 00:41:02.195 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:41:02.452 [2024-12-09 05:35:56.620138] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:41:07.709 Initializing NVMe Controllers 00:41:07.709 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:41:07.709 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:41:07.709 Initialization complete. Launching workers. 
00:41:07.709 ======================================================== 00:41:07.709 Latency(us) 00:41:07.709 Device Information : IOPS MiB/s Average min max 00:41:07.709 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30650.55 119.73 4175.28 1233.13 10265.19 00:41:07.709 ======================================================== 00:41:07.709 Total : 30650.55 119.73 4175.28 1233.13 10265.19 00:41:07.709 00:41:07.709 [2024-12-09 05:36:01.725652] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:41:07.709 05:36:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:41:07.967 [2024-12-09 05:36:02.067662] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:41:13.232 Initializing NVMe Controllers 00:41:13.232 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:41:13.232 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:41:13.232 Initialization complete. Launching workers. 00:41:13.232 ======================================================== 00:41:13.232 Latency(us) 00:41:13.232 Device Information : IOPS MiB/s Average min max 00:41:13.232 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 29551.09 115.43 4330.93 1228.66 9459.52 00:41:13.232 ======================================================== 00:41:13.232 Total : 29551.09 115.43 4330.93 1228.66 9459.52 00:41:13.232 00:41:13.232 [2024-12-09 05:36:07.086247] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:41:13.232 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:41:13.232 [2024-12-09 05:36:07.401999] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:41:18.493 [2024-12-09 05:36:12.543452] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:41:18.493 Initializing NVMe Controllers 00:41:18.493 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:41:18.493 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:41:18.493 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:41:18.493 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:41:18.493 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:41:18.493 Initialization complete. Launching workers. 
00:41:18.493 Starting thread on core 2 00:41:18.493 Starting thread on core 3 00:41:18.493 Starting thread on core 1 00:41:18.493 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:41:18.749 [2024-12-09 05:36:12.955802] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:41:22.931 [2024-12-09 05:36:16.392744] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:41:22.931 Initializing NVMe Controllers 00:41:22.931 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:41:22.931 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:41:22.931 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:41:22.931 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:41:22.931 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:41:22.931 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:41:22.931 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:41:22.931 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:41:22.931 Initialization complete. Launching workers. 00:41:22.931 Starting thread on core 1 with urgent priority queue 00:41:22.931 Starting thread on core 2 with urgent priority queue 00:41:22.931 Starting thread on core 3 with urgent priority queue 00:41:22.931 Starting thread on core 0 with urgent priority queue 00:41:22.931 SPDK bdev Controller (SPDK2 ) core 0: 5745.00 IO/s 17.41 secs/100000 ios 00:41:22.931 SPDK bdev Controller (SPDK2 ) core 1: 6459.33 IO/s 15.48 secs/100000 ios 00:41:22.931 SPDK bdev Controller (SPDK2 ) core 2: 5941.33 IO/s 16.83 secs/100000 ios 00:41:22.931 SPDK bdev Controller (SPDK2 ) core 3: 4728.67 IO/s 21.15 secs/100000 ios 00:41:22.931 ======================================================== 00:41:22.931 00:41:22.931 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:41:22.931 [2024-12-09 05:36:16.799828] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:41:22.931 Initializing NVMe Controllers 00:41:22.931 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:41:22.931 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:41:22.931 Namespace ID: 1 size: 0GB 00:41:22.931 Initialization complete. 00:41:22.931 INFO: using host memory buffer for IO 00:41:22.931 Hello world! 
00:41:22.931 [2024-12-09 05:36:16.808881] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:41:22.931 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:41:23.190 [2024-12-09 05:36:17.201987] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:41:24.125 Initializing NVMe Controllers 00:41:24.125 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:41:24.125 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:41:24.125 Initialization complete. Launching workers. 00:41:24.125 submit (in ns) avg, min, max = 7033.7, 3518.9, 4003694.4 00:41:24.125 complete (in ns) avg, min, max = 31507.3, 2074.4, 4998730.0 00:41:24.125 00:41:24.125 Submit histogram 00:41:24.125 ================ 00:41:24.125 Range in us Cumulative Count 00:41:24.125 3.508 - 3.532: 0.0392% ( 5) 00:41:24.125 3.532 - 3.556: 0.4550% ( 53) 00:41:24.125 3.556 - 3.579: 1.5768% ( 143) 00:41:24.125 3.579 - 3.603: 4.1657% ( 330) 00:41:24.125 3.603 - 3.627: 9.2100% ( 643) 00:41:24.125 3.627 - 3.650: 16.9765% ( 990) 00:41:24.125 3.650 - 3.674: 25.2138% ( 1050) 00:41:24.125 3.674 - 3.698: 34.3061% ( 1159) 00:41:24.125 3.698 - 3.721: 43.0140% ( 1110) 00:41:24.125 3.721 - 3.745: 50.4511% ( 948) 00:41:24.125 3.745 - 3.769: 55.9975% ( 707) 00:41:24.125 3.769 - 3.793: 61.4498% ( 695) 00:41:24.125 3.793 - 3.816: 65.3801% ( 501) 00:41:24.125 3.816 - 3.840: 69.4909% ( 524) 00:41:24.125 3.840 - 3.864: 72.7622% ( 417) 00:41:24.125 3.864 - 3.887: 75.9551% ( 407) 00:41:24.125 3.887 - 3.911: 79.6187% ( 467) 00:41:24.125 3.911 - 3.935: 82.9764% ( 428) 00:41:24.125 3.935 - 3.959: 85.5339% ( 326) 00:41:24.125 3.959 - 3.982: 87.5500% ( 257) 00:41:24.125 3.982 - 4.006: 89.5113% ( 250) 00:41:24.125 4.006 - 4.030: 90.9783% ( 187) 00:41:24.125 4.030 - 4.053: 92.1236% ( 146) 00:41:24.125 4.053 - 4.077: 93.2690% ( 146) 00:41:24.125 4.077 - 4.101: 94.1320% ( 110) 00:41:24.125 4.101 - 4.124: 94.7831% ( 83) 00:41:24.125 4.124 - 4.148: 95.3322% ( 70) 00:41:24.125 4.148 - 4.172: 95.7245% ( 50) 00:41:24.125 4.172 - 4.196: 96.0069% ( 36) 00:41:24.125 4.196 - 4.219: 96.1952% ( 24) 00:41:24.125 4.219 - 4.243: 96.3521% ( 20) 00:41:24.125 4.243 - 4.267: 96.4462% ( 12) 00:41:24.125 4.267 - 4.290: 96.5011% ( 7) 00:41:24.125 4.290 - 4.314: 96.5482% ( 6) 00:41:24.125 4.314 - 4.338: 96.6110% ( 8) 00:41:24.125 4.338 - 4.361: 96.7443% ( 17) 00:41:24.125 4.361 - 4.385: 96.7992% ( 7) 00:41:24.125 4.385 - 4.409: 96.9091% ( 14) 00:41:24.125 4.409 - 4.433: 96.9640% ( 7) 00:41:24.125 4.433 - 4.456: 96.9954% ( 4) 00:41:24.125 4.456 - 4.480: 97.0424% ( 6) 00:41:24.125 4.480 - 4.504: 97.0581% ( 2) 00:41:24.125 4.504 - 4.527: 97.0738% ( 2) 00:41:24.125 4.527 - 4.551: 97.0895% ( 2) 00:41:24.125 4.551 - 4.575: 97.0974% ( 1) 00:41:24.125 4.575 - 4.599: 97.1209% ( 3) 00:41:24.125 4.599 - 4.622: 97.1366% ( 2) 00:41:24.125 4.646 - 4.670: 97.1523% ( 2) 00:41:24.125 4.670 - 4.693: 97.1601% ( 1) 00:41:24.125 4.693 - 4.717: 97.1680% ( 1) 00:41:24.125 4.717 - 4.741: 97.1993% ( 4) 00:41:24.125 4.741 - 4.764: 97.2543% ( 7) 00:41:24.125 4.764 - 4.788: 97.2856% ( 4) 00:41:24.125 4.788 - 4.812: 97.3327% ( 6) 00:41:24.125 4.812 - 4.836: 97.3719% ( 5) 00:41:24.125 4.836 - 4.859: 97.4190% ( 6) 00:41:24.125 4.859 - 4.883: 97.5288% ( 14) 00:41:24.125 4.883 - 
4.907: 97.5602% ( 4) 00:41:24.125 4.907 - 4.930: 97.5916% ( 4) 00:41:24.125 4.930 - 4.954: 97.6465% ( 7) 00:41:24.125 4.954 - 4.978: 97.7014% ( 7) 00:41:24.125 4.978 - 5.001: 97.7642% ( 8) 00:41:24.125 5.001 - 5.025: 97.8426% ( 10) 00:41:24.125 5.025 - 5.049: 97.9054% ( 8) 00:41:24.125 5.049 - 5.073: 97.9446% ( 5) 00:41:24.125 5.073 - 5.096: 97.9760% ( 4) 00:41:24.125 5.096 - 5.120: 97.9917% ( 2) 00:41:24.125 5.120 - 5.144: 98.0388% ( 6) 00:41:24.125 5.144 - 5.167: 98.0544% ( 2) 00:41:24.125 5.167 - 5.191: 98.0780% ( 3) 00:41:24.125 5.191 - 5.215: 98.1094% ( 4) 00:41:24.125 5.215 - 5.239: 98.1486% ( 5) 00:41:24.125 5.239 - 5.262: 98.1721% ( 3) 00:41:24.125 5.262 - 5.286: 98.1957% ( 3) 00:41:24.125 5.286 - 5.310: 98.2035% ( 1) 00:41:24.125 5.310 - 5.333: 98.2270% ( 3) 00:41:24.125 5.357 - 5.381: 98.2427% ( 2) 00:41:24.125 5.381 - 5.404: 98.2506% ( 1) 00:41:24.125 5.404 - 5.428: 98.2584% ( 1) 00:41:24.125 5.428 - 5.452: 98.2663% ( 1) 00:41:24.125 5.547 - 5.570: 98.2741% ( 1) 00:41:24.125 5.618 - 5.641: 98.2819% ( 1) 00:41:24.125 5.665 - 5.689: 98.2898% ( 1) 00:41:24.125 5.689 - 5.713: 98.2976% ( 1) 00:41:24.125 5.713 - 5.736: 98.3055% ( 1) 00:41:24.125 5.807 - 5.831: 98.3133% ( 1) 00:41:24.125 5.879 - 5.902: 98.3212% ( 1) 00:41:24.125 6.116 - 6.163: 98.3290% ( 1) 00:41:24.125 6.210 - 6.258: 98.3369% ( 1) 00:41:24.125 6.305 - 6.353: 98.3447% ( 1) 00:41:24.125 6.495 - 6.542: 98.3526% ( 1) 00:41:24.125 7.064 - 7.111: 98.3604% ( 1) 00:41:24.125 7.111 - 7.159: 98.3682% ( 1) 00:41:24.125 7.159 - 7.206: 98.3761% ( 1) 00:41:24.125 7.206 - 7.253: 98.3839% ( 1) 00:41:24.125 7.253 - 7.301: 98.3918% ( 1) 00:41:24.125 7.348 - 7.396: 98.3996% ( 1) 00:41:24.125 7.490 - 7.538: 98.4232% ( 3) 00:41:24.125 7.538 - 7.585: 98.4388% ( 2) 00:41:24.125 7.680 - 7.727: 98.4545% ( 2) 00:41:24.125 7.822 - 7.870: 98.4624% ( 1) 00:41:24.125 7.964 - 8.012: 98.4702% ( 1) 00:41:24.125 8.012 - 8.059: 98.4781% ( 1) 00:41:24.125 8.059 - 8.107: 98.4859% ( 1) 00:41:24.125 8.107 - 8.154: 98.4938% ( 1) 00:41:24.125 8.154 - 8.201: 98.5016% ( 1) 00:41:24.125 8.296 - 8.344: 98.5095% ( 1) 00:41:24.125 8.486 - 8.533: 98.5173% ( 1) 00:41:24.125 8.533 - 8.581: 98.5251% ( 1) 00:41:24.125 8.581 - 8.628: 98.5330% ( 1) 00:41:24.125 8.676 - 8.723: 98.5487% ( 2) 00:41:24.125 8.865 - 8.913: 98.5565% ( 1) 00:41:24.125 8.913 - 8.960: 98.5644% ( 1) 00:41:24.125 9.007 - 9.055: 98.5722% ( 1) 00:41:24.125 9.102 - 9.150: 98.5879% ( 2) 00:41:24.125 9.197 - 9.244: 98.5957% ( 1) 00:41:24.125 9.244 - 9.292: 98.6036% ( 1) 00:41:24.125 9.292 - 9.339: 98.6114% ( 1) 00:41:24.125 9.481 - 9.529: 98.6193% ( 1) 00:41:24.125 9.529 - 9.576: 98.6350% ( 2) 00:41:24.125 9.671 - 9.719: 98.6507% ( 2) 00:41:24.125 9.719 - 9.766: 98.6585% ( 1) 00:41:24.125 9.813 - 9.861: 98.6664% ( 1) 00:41:24.125 10.003 - 10.050: 98.6742% ( 1) 00:41:24.125 10.145 - 10.193: 98.6820% ( 1) 00:41:24.125 10.335 - 10.382: 98.6899% ( 1) 00:41:24.125 10.430 - 10.477: 98.6977% ( 1) 00:41:24.125 10.619 - 10.667: 98.7056% ( 1) 00:41:24.125 11.141 - 11.188: 98.7134% ( 1) 00:41:24.125 11.188 - 11.236: 98.7213% ( 1) 00:41:24.125 11.330 - 11.378: 98.7291% ( 1) 00:41:24.125 11.378 - 11.425: 98.7370% ( 1) 00:41:24.125 11.425 - 11.473: 98.7448% ( 1) 00:41:24.125 11.473 - 11.520: 98.7526% ( 1) 00:41:24.125 11.710 - 11.757: 98.7762% ( 3) 00:41:24.125 11.757 - 11.804: 98.7840% ( 1) 00:41:24.126 11.899 - 11.947: 98.7919% ( 1) 00:41:24.126 11.994 - 12.041: 98.8076% ( 2) 00:41:24.126 12.041 - 12.089: 98.8154% ( 1) 00:41:24.126 12.089 - 12.136: 98.8233% ( 1) 00:41:24.126 12.136 - 12.231: 98.8311% ( 1) 
00:41:24.126 12.231 - 12.326: 98.8389% ( 1) 00:41:24.126 12.326 - 12.421: 98.8546% ( 2) 00:41:24.126 12.516 - 12.610: 98.8625% ( 1) 00:41:24.126 12.705 - 12.800: 98.8782% ( 2) 00:41:24.126 13.084 - 13.179: 98.8860% ( 1) 00:41:24.126 13.559 - 13.653: 98.8939% ( 1) 00:41:24.126 13.843 - 13.938: 98.9017% ( 1) 00:41:24.126 13.938 - 14.033: 98.9095% ( 1) 00:41:24.126 14.033 - 14.127: 98.9174% ( 1) 00:41:24.126 14.127 - 14.222: 98.9252% ( 1) 00:41:24.126 14.222 - 14.317: 98.9331% ( 1) 00:41:24.126 14.412 - 14.507: 98.9409% ( 1) 00:41:24.126 14.507 - 14.601: 98.9488% ( 1) 00:41:24.126 14.886 - 14.981: 98.9566% ( 1) 00:41:24.126 16.687 - 16.782: 98.9645% ( 1) 00:41:24.126 17.161 - 17.256: 98.9723% ( 1) 00:41:24.126 17.256 - 17.351: 98.9958% ( 3) 00:41:24.126 17.351 - 17.446: 99.0272% ( 4) 00:41:24.126 17.446 - 17.541: 99.0429% ( 2) 00:41:24.126 17.541 - 17.636: 99.0664% ( 3) 00:41:24.126 17.636 - 17.730: 99.1214% ( 7) 00:41:24.126 17.730 - 17.825: 99.1841% ( 8) 00:41:24.126 17.825 - 17.920: 99.2626% ( 10) 00:41:24.126 17.920 - 18.015: 99.3489% ( 11) 00:41:24.126 18.015 - 18.110: 99.4038% ( 7) 00:41:24.126 18.110 - 18.204: 99.4901% ( 11) 00:41:24.126 18.204 - 18.299: 99.5136% ( 3) 00:41:24.126 18.299 - 18.394: 99.6078% ( 12) 00:41:24.126 18.394 - 18.489: 99.6705% ( 8) 00:41:24.126 18.489 - 18.584: 99.7333% ( 8) 00:41:24.126 18.679 - 18.773: 99.7882% ( 7) 00:41:24.126 18.868 - 18.963: 99.8274% ( 5) 00:41:24.126 18.963 - 19.058: 99.8353% ( 1) 00:41:24.126 19.058 - 19.153: 99.8588% ( 3) 00:41:24.126 19.153 - 19.247: 99.8823% ( 3) 00:41:24.126 19.437 - 19.532: 99.8902% ( 1) 00:41:24.126 22.187 - 22.281: 99.8980% ( 1) 00:41:24.126 23.514 - 23.609: 99.9059% ( 1) 00:41:24.126 24.083 - 24.178: 99.9137% ( 1) 00:41:24.126 25.979 - 26.169: 99.9216% ( 1) 00:41:24.126 3021.938 - 3034.074: 99.9294% ( 1) 00:41:24.126 3980.705 - 4004.978: 100.0000% ( 9) 00:41:24.126 00:41:24.126 Complete histogram 00:41:24.126 ================== 00:41:24.126 Range in us Cumulative Count 00:41:24.126 2.074 - 2.086: 15.3840% ( 1961) 00:41:24.126 2.086 - 2.098: 43.2808% ( 3556) 00:41:24.126 2.098 - 2.110: 46.0187% ( 349) 00:41:24.126 2.110 - 2.121: 53.5106% ( 955) 00:41:24.126 2.121 - 2.133: 58.1235% ( 588) 00:41:24.126 2.133 - 2.145: 60.0612% ( 247) 00:41:24.126 2.145 - 2.157: 72.1189% ( 1537) 00:41:24.126 2.157 - 2.169: 79.9090% ( 993) 00:41:24.126 2.169 - 2.181: 81.2583% ( 172) 00:41:24.126 2.181 - 2.193: 84.8670% ( 460) 00:41:24.126 2.193 - 2.204: 87.0009% ( 272) 00:41:24.126 2.204 - 2.216: 87.8246% ( 105) 00:41:24.126 2.216 - 2.228: 89.7466% ( 245) 00:41:24.126 2.228 - 2.240: 91.6529% ( 243) 00:41:24.126 2.240 - 2.252: 93.2376% ( 202) 00:41:24.126 2.252 - 2.264: 93.9594% ( 92) 00:41:24.126 2.264 - 2.276: 94.2026% ( 31) 00:41:24.126 2.276 - 2.287: 94.2967% ( 12) 00:41:24.126 2.287 - 2.299: 94.6027% ( 39) 00:41:24.126 2.299 - 2.311: 95.0263% ( 54) 00:41:24.126 2.311 - 2.323: 95.3322% ( 39) 00:41:24.126 2.323 - 2.335: 95.4970% ( 21) 00:41:24.126 2.335 - 2.347: 95.5362% ( 5) 00:41:24.126 2.347 - 2.359: 95.5754% ( 5) 00:41:24.126 2.359 - 2.370: 95.6225% ( 6) 00:41:24.126 2.370 - 2.382: 95.7323% ( 14) 00:41:24.126 2.382 - 2.394: 95.8814% ( 19) 00:41:24.126 2.394 - 2.406: 96.1246% ( 31) 00:41:24.126 2.406 - 2.418: 96.2972% ( 22) 00:41:24.126 2.418 - 2.430: 96.5090% ( 27) 00:41:24.126 2.430 - 2.441: 96.8385% ( 42) 00:41:24.126 2.441 - 2.453: 97.0268% ( 24) 00:41:24.126 2.453 - 2.465: 97.2307% ( 26) 00:41:24.126 2.465 - 2.477: 97.4896% ( 33) 00:41:24.126 2.477 - 2.489: 97.6308% ( 18) 00:41:24.126 2.489 - 2.501: 97.7406% ( 14) 
00:41:24.126 2.501 - 2.513: 97.8034% ( 8) 00:41:24.126 2.513 - 2.524: 97.9446% ( 18) 00:41:24.126 2.524 - 2.536: 98.0074% ( 8) 00:41:24.126 2.548 - 2.560: 98.0388% ( 4) 00:41:24.126 2.560 - 2.572: 98.0858% ( 6) 00:41:24.126 2.572 - 2.584: 98.1015% ( 2) 00:41:24.126 2.584 - 2.596: 98.1094% ( 1) 00:41:24.126 2.596 - 2.607: 98.1329% ( 3) 00:41:24.126 2.607 - 2.619: 98.1407% ( 1) 00:41:24.126 2.619 - 2.631: 98.1486% ( 1) 00:41:24.126 2.631 - 2.643: 98.1564% ( 1) 00:41:24.126 2.643 - 2.655: 98.1721% ( 2) 00:41:24.126 2.667 - 2.679: 98.1800% ( 1) 00:41:24.126 2.690 - 2.702: 98.1878% ( 1) 00:41:24.126 2.702 - 2.714: 98.1957% ( 1) 00:41:24.126 2.726 - 2.738: 98.2035% ( 1) 00:41:24.126 2.761 - 2.773: 98.2113% ( 1) 00:41:24.126 2.785 - 2.797: 98.2192% ( 1) 00:41:24.126 2.809 - 2.821: 98.2270% ( 1) 00:41:24.126 2.844 - 2.856: 98.2349% ( 1) 00:41:24.126 2.868 - 2.880: 98.2427% ( 1) 00:41:24.126 2.892 - 2.904: 98.2506% ( 1) 00:41:24.126 2.904 - 2.916: 98.2584% ( 1) 00:41:24.126 2.916 - 2.927: 98.2663% ( 1) 00:41:24.126 2.939 - 2.951: 98.2741% ( 1) 00:41:24.126 2.951 - 2.963: 98.3055% ( 4) 00:41:24.126 2.963 - 2.975: 98.3133% ( 1) 00:41:24.126 2.975 - 2.987: 98.3212% ( 1) 00:41:24.126 3.010 - 3.022: 98.3290% ( 1) 00:41:24.126 3.034 - 3.058: 98.3369% ( 1) 00:41:24.126 3.058 - 3.081: 98.3447% ( 1) 00:41:24.126 3.081 - 3.105: 98.3526% ( 1) 00:41:24.126 3.129 - 3.153: 9[2024-12-09 05:36:18.296090] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:41:24.126 8.3604% ( 1) 00:41:24.126 3.153 - 3.176: 98.3682% ( 1) 00:41:24.126 3.176 - 3.200: 98.3839% ( 2) 00:41:24.126 3.224 - 3.247: 98.3996% ( 2) 00:41:24.126 3.247 - 3.271: 98.4075% ( 1) 00:41:24.126 3.271 - 3.295: 98.4232% ( 2) 00:41:24.126 3.319 - 3.342: 98.4310% ( 1) 00:41:24.126 3.366 - 3.390: 98.4388% ( 1) 00:41:24.126 3.390 - 3.413: 98.4467% ( 1) 00:41:24.126 3.413 - 3.437: 98.4545% ( 1) 00:41:24.126 3.461 - 3.484: 98.4702% ( 2) 00:41:24.126 3.484 - 3.508: 98.4781% ( 1) 00:41:24.126 3.508 - 3.532: 98.4859% ( 1) 00:41:24.126 3.532 - 3.556: 98.4938% ( 1) 00:41:24.126 3.556 - 3.579: 98.5095% ( 2) 00:41:24.126 3.579 - 3.603: 98.5173% ( 1) 00:41:24.126 3.650 - 3.674: 98.5251% ( 1) 00:41:24.126 3.674 - 3.698: 98.5330% ( 1) 00:41:24.126 3.745 - 3.769: 98.5408% ( 1) 00:41:24.126 3.769 - 3.793: 98.5565% ( 2) 00:41:24.126 3.887 - 3.911: 98.5644% ( 1) 00:41:24.126 3.982 - 4.006: 98.5722% ( 1) 00:41:24.126 4.006 - 4.030: 98.5801% ( 1) 00:41:24.126 4.196 - 4.219: 98.5879% ( 1) 00:41:24.126 5.239 - 5.262: 98.5957% ( 1) 00:41:24.126 5.310 - 5.333: 98.6036% ( 1) 00:41:24.126 5.428 - 5.452: 98.6114% ( 1) 00:41:24.126 5.476 - 5.499: 98.6193% ( 1) 00:41:24.126 5.641 - 5.665: 98.6271% ( 1) 00:41:24.126 6.044 - 6.068: 98.6350% ( 1) 00:41:24.126 6.068 - 6.116: 98.6428% ( 1) 00:41:24.126 6.779 - 6.827: 98.6507% ( 1) 00:41:24.126 6.827 - 6.874: 98.6585% ( 1) 00:41:24.126 7.111 - 7.159: 98.6664% ( 1) 00:41:24.126 7.727 - 7.775: 98.6742% ( 1) 00:41:24.126 7.917 - 7.964: 98.6820% ( 1) 00:41:24.126 7.964 - 8.012: 98.6899% ( 1) 00:41:24.126 8.107 - 8.154: 98.7056% ( 2) 00:41:24.126 8.154 - 8.201: 98.7134% ( 1) 00:41:24.126 8.913 - 8.960: 98.7291% ( 2) 00:41:24.126 12.231 - 12.326: 98.7370% ( 1) 00:41:24.126 15.455 - 15.550: 98.7448% ( 1) 00:41:24.126 15.644 - 15.739: 98.7605% ( 2) 00:41:24.126 15.739 - 15.834: 98.8076% ( 6) 00:41:24.126 15.834 - 15.929: 98.8154% ( 1) 00:41:24.126 15.929 - 16.024: 98.8389% ( 3) 00:41:24.126 16.024 - 16.119: 98.8782% ( 5) 00:41:24.126 16.119 - 16.213: 98.9174% ( 5) 00:41:24.126 16.213 - 16.308: 
98.9409% ( 3) 00:41:24.126 16.308 - 16.403: 98.9723% ( 4) 00:41:24.126 16.403 - 16.498: 98.9880% ( 2) 00:41:24.126 16.498 - 16.593: 99.0037% ( 2) 00:41:24.126 16.593 - 16.687: 99.0508% ( 6) 00:41:24.126 16.687 - 16.782: 99.0900% ( 5) 00:41:24.126 16.782 - 16.877: 99.1292% ( 5) 00:41:24.126 16.877 - 16.972: 99.1449% ( 2) 00:41:24.126 16.972 - 17.067: 99.1606% ( 2) 00:41:24.126 17.161 - 17.256: 99.1841% ( 3) 00:41:24.126 17.256 - 17.351: 99.1920% ( 1) 00:41:24.126 17.351 - 17.446: 99.2077% ( 2) 00:41:24.126 17.446 - 17.541: 99.2155% ( 1) 00:41:24.126 17.541 - 17.636: 99.2312% ( 2) 00:41:24.126 17.730 - 17.825: 99.2469% ( 2) 00:41:24.126 17.920 - 18.015: 99.2547% ( 1) 00:41:24.126 18.110 - 18.204: 99.2626% ( 1) 00:41:24.126 29.013 - 29.203: 99.2704% ( 1) 00:41:24.127 3980.705 - 4004.978: 99.7803% ( 65) 00:41:24.127 4004.978 - 4029.250: 99.9922% ( 27) 00:41:24.127 4975.881 - 5000.154: 100.0000% ( 1) 00:41:24.127 00:41:24.127 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:41:24.127 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:41:24.385 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:41:24.385 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:41:24.385 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:41:24.644 [ 00:41:24.644 { 00:41:24.644 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:41:24.644 "subtype": "Discovery", 00:41:24.644 "listen_addresses": [], 00:41:24.644 "allow_any_host": true, 00:41:24.644 "hosts": [] 00:41:24.644 }, 00:41:24.644 { 00:41:24.644 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:41:24.644 "subtype": "NVMe", 00:41:24.644 "listen_addresses": [ 00:41:24.644 { 00:41:24.644 "trtype": "VFIOUSER", 00:41:24.644 "adrfam": "IPv4", 00:41:24.644 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:41:24.644 "trsvcid": "0" 00:41:24.644 } 00:41:24.644 ], 00:41:24.644 "allow_any_host": true, 00:41:24.644 "hosts": [], 00:41:24.644 "serial_number": "SPDK1", 00:41:24.644 "model_number": "SPDK bdev Controller", 00:41:24.644 "max_namespaces": 32, 00:41:24.644 "min_cntlid": 1, 00:41:24.644 "max_cntlid": 65519, 00:41:24.644 "namespaces": [ 00:41:24.644 { 00:41:24.644 "nsid": 1, 00:41:24.644 "bdev_name": "Malloc1", 00:41:24.644 "name": "Malloc1", 00:41:24.644 "nguid": "A753EF4CB1A846C39CC015D494AE3BF0", 00:41:24.644 "uuid": "a753ef4c-b1a8-46c3-9cc0-15d494ae3bf0" 00:41:24.644 }, 00:41:24.644 { 00:41:24.644 "nsid": 2, 00:41:24.644 "bdev_name": "Malloc3", 00:41:24.644 "name": "Malloc3", 00:41:24.644 "nguid": "64A1CEA959AE46E481B4F87299B97509", 00:41:24.644 "uuid": "64a1cea9-59ae-46e4-81b4-f87299b97509" 00:41:24.644 } 00:41:24.644 ] 00:41:24.644 }, 00:41:24.644 { 00:41:24.644 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:41:24.644 "subtype": "NVMe", 00:41:24.644 "listen_addresses": [ 00:41:24.644 { 00:41:24.644 "trtype": "VFIOUSER", 00:41:24.644 "adrfam": "IPv4", 00:41:24.644 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:41:24.644 "trsvcid": "0" 00:41:24.644 } 00:41:24.644 ], 00:41:24.644 "allow_any_host": true, 00:41:24.644 "hosts": [], 00:41:24.644 "serial_number": "SPDK2", 00:41:24.644 "model_number": 
"SPDK bdev Controller", 00:41:24.644 "max_namespaces": 32, 00:41:24.644 "min_cntlid": 1, 00:41:24.644 "max_cntlid": 65519, 00:41:24.644 "namespaces": [ 00:41:24.644 { 00:41:24.644 "nsid": 1, 00:41:24.644 "bdev_name": "Malloc2", 00:41:24.644 "name": "Malloc2", 00:41:24.644 "nguid": "94A3144672C44ADABB041D31DCBA901F", 00:41:24.644 "uuid": "94a31446-72c4-4ada-bb04-1d31dcba901f" 00:41:24.644 } 00:41:24.644 ] 00:41:24.644 } 00:41:24.644 ] 00:41:24.644 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:41:24.644 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=629193 00:41:24.644 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:41:24.644 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:41:24.644 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:41:24.644 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:41:24.644 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:41:24.644 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:41:24.644 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:41:24.644 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:41:24.644 [2024-12-09 05:36:18.793794] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:41:24.903 Malloc4 00:41:24.903 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:41:25.160 [2024-12-09 05:36:19.203886] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:41:25.160 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:41:25.160 Asynchronous Event Request test 00:41:25.160 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:41:25.160 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:41:25.160 Registering asynchronous event callbacks... 00:41:25.160 Starting namespace attribute notice tests for all controllers... 00:41:25.160 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:41:25.160 aer_cb - Changed Namespace 00:41:25.160 Cleaning up... 
00:41:25.417 [ 00:41:25.417 { 00:41:25.417 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:41:25.417 "subtype": "Discovery", 00:41:25.417 "listen_addresses": [], 00:41:25.417 "allow_any_host": true, 00:41:25.417 "hosts": [] 00:41:25.417 }, 00:41:25.417 { 00:41:25.417 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:41:25.417 "subtype": "NVMe", 00:41:25.417 "listen_addresses": [ 00:41:25.417 { 00:41:25.417 "trtype": "VFIOUSER", 00:41:25.417 "adrfam": "IPv4", 00:41:25.417 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:41:25.417 "trsvcid": "0" 00:41:25.417 } 00:41:25.417 ], 00:41:25.417 "allow_any_host": true, 00:41:25.417 "hosts": [], 00:41:25.417 "serial_number": "SPDK1", 00:41:25.417 "model_number": "SPDK bdev Controller", 00:41:25.417 "max_namespaces": 32, 00:41:25.417 "min_cntlid": 1, 00:41:25.417 "max_cntlid": 65519, 00:41:25.417 "namespaces": [ 00:41:25.417 { 00:41:25.417 "nsid": 1, 00:41:25.417 "bdev_name": "Malloc1", 00:41:25.417 "name": "Malloc1", 00:41:25.417 "nguid": "A753EF4CB1A846C39CC015D494AE3BF0", 00:41:25.417 "uuid": "a753ef4c-b1a8-46c3-9cc0-15d494ae3bf0" 00:41:25.417 }, 00:41:25.417 { 00:41:25.417 "nsid": 2, 00:41:25.417 "bdev_name": "Malloc3", 00:41:25.417 "name": "Malloc3", 00:41:25.417 "nguid": "64A1CEA959AE46E481B4F87299B97509", 00:41:25.417 "uuid": "64a1cea9-59ae-46e4-81b4-f87299b97509" 00:41:25.417 } 00:41:25.417 ] 00:41:25.417 }, 00:41:25.417 { 00:41:25.417 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:41:25.417 "subtype": "NVMe", 00:41:25.417 "listen_addresses": [ 00:41:25.417 { 00:41:25.417 "trtype": "VFIOUSER", 00:41:25.417 "adrfam": "IPv4", 00:41:25.417 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:41:25.417 "trsvcid": "0" 00:41:25.418 } 00:41:25.418 ], 00:41:25.418 "allow_any_host": true, 00:41:25.418 "hosts": [], 00:41:25.418 "serial_number": "SPDK2", 00:41:25.418 "model_number": "SPDK bdev Controller", 00:41:25.418 "max_namespaces": 32, 00:41:25.418 "min_cntlid": 1, 00:41:25.418 "max_cntlid": 65519, 00:41:25.418 "namespaces": [ 00:41:25.418 { 00:41:25.418 "nsid": 1, 00:41:25.418 "bdev_name": "Malloc2", 00:41:25.418 "name": "Malloc2", 00:41:25.418 "nguid": "94A3144672C44ADABB041D31DCBA901F", 00:41:25.418 "uuid": "94a31446-72c4-4ada-bb04-1d31dcba901f" 00:41:25.418 }, 00:41:25.418 { 00:41:25.418 "nsid": 2, 00:41:25.418 "bdev_name": "Malloc4", 00:41:25.418 "name": "Malloc4", 00:41:25.418 "nguid": "91852BED63BB41218038C408D83EB3A7", 00:41:25.418 "uuid": "91852bed-63bb-4121-8038-c408d83eb3a7" 00:41:25.418 } 00:41:25.418 ] 00:41:25.418 } 00:41:25.418 ] 00:41:25.418 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 629193 00:41:25.418 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:41:25.418 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 622820 00:41:25.418 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 622820 ']' 00:41:25.418 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 622820 00:41:25.418 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:41:25.418 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:25.418 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 622820 00:41:25.418 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:25.418 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:25.418 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 622820' 00:41:25.418 killing process with pid 622820 00:41:25.418 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 622820 00:41:25.418 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 622820 00:41:25.983 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:41:25.983 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:41:25.983 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:41:25.983 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:41:25.983 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:41:25.983 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=629377 00:41:25.983 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 629377' 00:41:25.983 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:41:25.983 Process pid: 629377 00:41:25.983 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:41:25.983 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 629377 00:41:25.983 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 629377 ']' 00:41:25.983 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:25.983 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:25.983 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:25.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:25.983 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:25.983 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:41:25.983 [2024-12-09 05:36:19.956342] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:25.983 [2024-12-09 05:36:19.957387] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:41:25.983 [2024-12-09 05:36:19.957447] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:25.983 [2024-12-09 05:36:20.036088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:25.983 [2024-12-09 05:36:20.098498] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:25.983 [2024-12-09 05:36:20.098569] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:25.983 [2024-12-09 05:36:20.098591] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:25.983 [2024-12-09 05:36:20.098602] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:25.983 [2024-12-09 05:36:20.098612] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:25.983 [2024-12-09 05:36:20.100160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:25.983 [2024-12-09 05:36:20.100221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:25.983 [2024-12-09 05:36:20.100244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:25.983 [2024-12-09 05:36:20.100247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:25.983 [2024-12-09 05:36:20.190975] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:25.983 [2024-12-09 05:36:20.191169] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:25.983 [2024-12-09 05:36:20.191445] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:25.983 [2024-12-09 05:36:20.192027] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:25.984 [2024-12-09 05:36:20.192244] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:41:26.242 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:26.242 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:41:26.242 05:36:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:41:27.175 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:41:27.435 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:41:27.435 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:41:27.435 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:41:27.435 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:41:27.435 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:41:27.695 Malloc1 00:41:27.695 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:41:27.951 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:41:28.209 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:41:28.466 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:41:28.466 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:41:28.466 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:41:28.723 Malloc2 00:41:28.723 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:41:28.980 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:41:29.237 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:41:29.494 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:41:29.494 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 629377 00:41:29.494 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 629377 ']' 00:41:29.494 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 629377 00:41:29.494 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:41:29.494 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:29.494 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 629377 00:41:29.751 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:29.751 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:29.751 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 629377' 00:41:29.751 killing process with pid 629377 00:41:29.751 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 629377 00:41:29.751 05:36:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 629377 00:41:30.007 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:41:30.007 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:41:30.007 00:41:30.007 real 0m54.772s 00:41:30.007 user 3m31.753s 00:41:30.007 sys 0m3.908s 00:41:30.007 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:30.007 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:41:30.007 ************************************ 00:41:30.007 END TEST nvmf_vfio_user 00:41:30.007 ************************************ 00:41:30.007 05:36:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:41:30.007 05:36:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:30.007 05:36:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:30.007 05:36:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:41:30.007 ************************************ 00:41:30.007 START TEST nvmf_vfio_user_nvme_compliance 00:41:30.007 ************************************ 00:41:30.007 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:41:30.007 * Looking for test storage... 
00:41:30.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:41:30.007 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:30.007 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:41:30.008 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:30.266 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:30.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:30.267 --rc genhtml_branch_coverage=1 00:41:30.267 --rc genhtml_function_coverage=1 00:41:30.267 --rc genhtml_legend=1 00:41:30.267 --rc geninfo_all_blocks=1 00:41:30.267 --rc geninfo_unexecuted_blocks=1 00:41:30.267 00:41:30.267 ' 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:30.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:30.267 --rc genhtml_branch_coverage=1 00:41:30.267 --rc genhtml_function_coverage=1 00:41:30.267 --rc genhtml_legend=1 00:41:30.267 --rc geninfo_all_blocks=1 00:41:30.267 --rc geninfo_unexecuted_blocks=1 00:41:30.267 00:41:30.267 ' 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:30.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:30.267 --rc genhtml_branch_coverage=1 00:41:30.267 --rc genhtml_function_coverage=1 00:41:30.267 --rc genhtml_legend=1 00:41:30.267 --rc geninfo_all_blocks=1 00:41:30.267 --rc geninfo_unexecuted_blocks=1 00:41:30.267 00:41:30.267 ' 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:30.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:30.267 --rc genhtml_branch_coverage=1 00:41:30.267 --rc genhtml_function_coverage=1 00:41:30.267 --rc genhtml_legend=1 00:41:30.267 --rc geninfo_all_blocks=1 00:41:30.267 --rc 
geninfo_unexecuted_blocks=1 00:41:30.267 00:41:30.267 ' 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:30.267 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:30.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=629947 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 629947' 00:41:30.268 Process pid: 629947 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 629947 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 629947 ']' 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:30.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:30.268 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:41:30.268 [2024-12-09 05:36:24.330300] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:41:30.268 [2024-12-09 05:36:24.330389] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:30.268 [2024-12-09 05:36:24.400860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:30.268 [2024-12-09 05:36:24.461020] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:30.268 [2024-12-09 05:36:24.461085] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:30.268 [2024-12-09 05:36:24.461100] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:30.268 [2024-12-09 05:36:24.461110] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:30.268 [2024-12-09 05:36:24.461133] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:30.268 [2024-12-09 05:36:24.462506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:30.268 [2024-12-09 05:36:24.462561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:30.268 [2024-12-09 05:36:24.462565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:30.527 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:30.527 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:41:30.527 05:36:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:41:31.469 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:41:31.469 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:41:31.469 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:41:31.469 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.469 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:41:31.469 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.469 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:41:31.469 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:41:31.469 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.469 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:41:31.469 malloc0 00:41:31.469 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.469 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:41:31.469 05:36:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.469 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:41:31.469 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.469 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:41:31.469 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.469 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:41:31.469 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.469 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:41:31.469 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.469 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:41:31.469 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.469 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:41:31.727 00:41:31.727 00:41:31.727 CUnit - A unit testing framework for C - Version 2.1-3 00:41:31.727 http://cunit.sourceforge.net/ 00:41:31.727 00:41:31.727 00:41:31.727 Suite: nvme_compliance 00:41:31.727 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-09 05:36:25.816788] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:41:31.727 [2024-12-09 05:36:25.818206] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:41:31.727 [2024-12-09 05:36:25.818230] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:41:31.727 [2024-12-09 05:36:25.818268] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:41:31.727 [2024-12-09 05:36:25.819804] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:41:31.727 passed 00:41:31.727 Test: admin_identify_ctrlr_verify_fused ...[2024-12-09 05:36:25.904397] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:41:31.727 [2024-12-09 05:36:25.907413] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:41:31.727 passed 00:41:31.986 Test: admin_identify_ns ...[2024-12-09 05:36:25.993811] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:41:31.986 [2024-12-09 05:36:26.057289] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:41:31.986 [2024-12-09 05:36:26.065300] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:41:31.986 [2024-12-09 05:36:26.086400] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:41:31.986 passed 00:41:31.986 Test: admin_get_features_mandatory_features ...[2024-12-09 05:36:26.167295] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:41:31.986 [2024-12-09 05:36:26.171320] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:41:31.986 passed 00:41:32.243 Test: admin_get_features_optional_features ...[2024-12-09 05:36:26.255896] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:41:32.244 [2024-12-09 05:36:26.258919] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:41:32.244 passed 00:41:32.244 Test: admin_set_features_number_of_queues ...[2024-12-09 05:36:26.342151] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:41:32.244 [2024-12-09 05:36:26.448379] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:41:32.501 passed 00:41:32.501 Test: admin_get_log_page_mandatory_logs ...[2024-12-09 05:36:26.529465] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:41:32.501 [2024-12-09 05:36:26.535502] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:41:32.501 passed 00:41:32.501 Test: admin_get_log_page_with_lpo ...[2024-12-09 05:36:26.616691] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:41:32.501 [2024-12-09 05:36:26.684292] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:41:32.501 [2024-12-09 05:36:26.697380] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:41:32.759 passed 00:41:32.759 Test: fabric_property_get ...[2024-12-09 05:36:26.783223] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:41:32.759 [2024-12-09 05:36:26.784526] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:41:32.759 [2024-12-09 05:36:26.786242] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:41:32.759 passed 00:41:32.759 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-09 05:36:26.869828] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:41:32.759 [2024-12-09 05:36:26.871117] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:41:32.759 [2024-12-09 05:36:26.872846] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:41:32.759 passed 00:41:32.759 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-09 05:36:26.957759] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:41:33.017 [2024-12-09 05:36:27.045281] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:41:33.017 [2024-12-09 05:36:27.061279] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:41:33.017 [2024-12-09 05:36:27.066510] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:41:33.017 passed 00:41:33.017 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-09 05:36:27.151885] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:41:33.017 [2024-12-09 05:36:27.153236] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:41:33.017 [2024-12-09 05:36:27.154906] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:41:33.017 passed 00:41:33.017 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-09 05:36:27.238430] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:41:33.274 [2024-12-09 05:36:27.315302] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:41:33.274 [2024-12-09 05:36:27.339285] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:41:33.274 [2024-12-09 05:36:27.344500] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:41:33.274 passed 00:41:33.274 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-09 05:36:27.429729] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:41:33.274 [2024-12-09 05:36:27.431064] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:41:33.274 [2024-12-09 05:36:27.431125] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:41:33.274 [2024-12-09 05:36:27.432749] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:41:33.274 passed 00:41:33.532 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-09 05:36:27.516481] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:41:33.532 [2024-12-09 05:36:27.608281] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:41:33.532 [2024-12-09 05:36:27.616297] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:41:33.532 [2024-12-09 05:36:27.624280] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:41:33.532 [2024-12-09 05:36:27.632296] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:41:33.532 [2024-12-09 05:36:27.664420] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:41:33.532 passed 00:41:33.532 Test: admin_create_io_sq_verify_pc ...[2024-12-09 05:36:27.744395] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:41:33.789 [2024-12-09 05:36:27.762301] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:41:33.789 [2024-12-09 05:36:27.779835] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:41:33.789 passed 00:41:33.789 Test: admin_create_io_qp_max_qps ...[2024-12-09 05:36:27.863391] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:41:35.163 [2024-12-09 05:36:28.981293] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:41:35.163 [2024-12-09 05:36:29.358374] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:41:35.421 passed 00:41:35.421 Test: admin_create_io_sq_shared_cq ...[2024-12-09 05:36:29.445953] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:41:35.421 [2024-12-09 05:36:29.577278] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:41:35.421 [2024-12-09 05:36:29.614370] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:41:35.753 passed 00:41:35.753 00:41:35.753 Run Summary: Type Total Ran Passed Failed Inactive 00:41:35.753 suites 1 1 n/a 0 0 00:41:35.753 tests 18 18 18 0 0 00:41:35.753 asserts 
360 360 360 0 n/a 00:41:35.753 00:41:35.753 Elapsed time = 1.576 seconds 00:41:35.753 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 629947 00:41:35.753 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 629947 ']' 00:41:35.753 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 629947 00:41:35.753 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:41:35.753 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:35.753 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 629947 00:41:35.753 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:35.753 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:35.753 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 629947' 00:41:35.753 killing process with pid 629947 00:41:35.753 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 629947 00:41:35.753 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 629947 00:41:36.072 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:41:36.072 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:41:36.072 00:41:36.072 real 0m5.862s 00:41:36.072 user 0m16.372s 00:41:36.072 sys 0m0.536s 00:41:36.072 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:36.072 05:36:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:41:36.072 ************************************ 00:41:36.072 END TEST nvmf_vfio_user_nvme_compliance 00:41:36.072 ************************************ 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:41:36.072 ************************************ 00:41:36.072 START TEST nvmf_vfio_user_fuzz 00:41:36.072 ************************************ 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:41:36.072 * Looking for test storage... 
00:41:36.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:36.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.072 --rc genhtml_branch_coverage=1 00:41:36.072 --rc genhtml_function_coverage=1 00:41:36.072 --rc genhtml_legend=1 00:41:36.072 --rc geninfo_all_blocks=1 00:41:36.072 --rc geninfo_unexecuted_blocks=1 00:41:36.072 00:41:36.072 ' 00:41:36.072 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:36.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.072 --rc genhtml_branch_coverage=1 00:41:36.072 --rc genhtml_function_coverage=1 00:41:36.072 --rc genhtml_legend=1 00:41:36.072 --rc geninfo_all_blocks=1 00:41:36.072 --rc geninfo_unexecuted_blocks=1 00:41:36.072 00:41:36.072 ' 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:36.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.073 --rc genhtml_branch_coverage=1 00:41:36.073 --rc genhtml_function_coverage=1 00:41:36.073 --rc genhtml_legend=1 00:41:36.073 --rc geninfo_all_blocks=1 00:41:36.073 --rc geninfo_unexecuted_blocks=1 00:41:36.073 00:41:36.073 ' 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:36.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.073 --rc genhtml_branch_coverage=1 00:41:36.073 --rc genhtml_function_coverage=1 00:41:36.073 --rc genhtml_legend=1 00:41:36.073 --rc geninfo_all_blocks=1 00:41:36.073 --rc geninfo_unexecuted_blocks=1 00:41:36.073 00:41:36.073 ' 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:41:36.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=630792 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 630792' 00:41:36.073 Process pid: 630792 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 630792 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 630792 ']' 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:36.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
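At this point the fuzz harness has launched its target (pid 630792, core mask 0x1) and is blocking in waitforlisten until the app's RPC socket answers. The helper itself is not shown in the log; a hand-rolled readiness poll would look roughly like the loop below (a hypothetical stand-in, using the standard rpc_get_methods RPC against the default /var/tmp/spdk.sock socket):

  # Poll the RPC socket until the target responds (hypothetical waitforlisten equivalent)
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
          sleep 0.5
  done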
00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:36.073 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:41:36.366 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:36.366 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:41:36.366 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:41:37.299 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:41:37.299 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:37.299 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:41:37.299 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:37.299 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:41:37.299 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:41:37.299 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:37.299 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:41:37.557 malloc0 00:41:37.557 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:37.557 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:41:37.557 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:37.557 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:41:37.557 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:37.557 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:41:37.557 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:37.557 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:41:37.557 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:37.557 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:41:37.557 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:37.557 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:41:37.557 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:37.557 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
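Stripped of the xtrace prefixes, the target-side wiring traced above reduces to one transport, one malloc bdev, and one vfio-user subsystem; reproducing it by hand against the running nvmf_tgt would look like the sketch below (commands as they appear in the log; rpc_cmd is the harness's wrapper around scripts/rpc.py):

  rpc_cmd nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  rpc_cmd bdev_malloc_create 64 512 -b malloc0             # 64 MiB bdev, 512-byte blocks, backs the namespace
  rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_fuzz invocation that follows then targets this endpoint through the trid string set above (trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user).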
00:41:37.557 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:42:09.614 Fuzzing completed. Shutting down the fuzz application 00:42:09.614 00:42:09.614 Dumping successful admin opcodes: 00:42:09.614 9, 10, 00:42:09.614 Dumping successful io opcodes: 00:42:09.614 0, 00:42:09.614 NS: 0x20000081ef00 I/O qp, Total commands completed: 625720, total successful commands: 2425, random_seed: 3829481536 00:42:09.614 NS: 0x20000081ef00 admin qp, Total commands completed: 154256, total successful commands: 34, random_seed: 89127296 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 630792 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 630792 ']' 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 630792 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 630792 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 630792' 00:42:09.614 killing process with pid 630792 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 630792 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 630792 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:42:09.614 00:42:09.614 real 0m32.346s 00:42:09.614 user 0m30.397s 00:42:09.614 sys 0m29.378s 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:42:09.614 ************************************ 
00:42:09.614 END TEST nvmf_vfio_user_fuzz 00:42:09.614 ************************************ 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:42:09.614 ************************************ 00:42:09.614 START TEST nvmf_auth_target 00:42:09.614 ************************************ 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:42:09.614 * Looking for test storage... 00:42:09.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:42:09.614 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:09.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:09.615 --rc genhtml_branch_coverage=1 00:42:09.615 --rc genhtml_function_coverage=1 00:42:09.615 --rc genhtml_legend=1 00:42:09.615 --rc geninfo_all_blocks=1 00:42:09.615 --rc geninfo_unexecuted_blocks=1 00:42:09.615 00:42:09.615 ' 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:09.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:09.615 --rc genhtml_branch_coverage=1 00:42:09.615 --rc genhtml_function_coverage=1 00:42:09.615 --rc genhtml_legend=1 00:42:09.615 --rc geninfo_all_blocks=1 00:42:09.615 --rc geninfo_unexecuted_blocks=1 00:42:09.615 00:42:09.615 ' 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:09.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:09.615 --rc genhtml_branch_coverage=1 00:42:09.615 --rc genhtml_function_coverage=1 00:42:09.615 --rc genhtml_legend=1 00:42:09.615 --rc geninfo_all_blocks=1 00:42:09.615 --rc geninfo_unexecuted_blocks=1 00:42:09.615 00:42:09.615 ' 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:09.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:09.615 --rc genhtml_branch_coverage=1 00:42:09.615 --rc genhtml_function_coverage=1 00:42:09.615 --rc genhtml_legend=1 00:42:09.615 --rc geninfo_all_blocks=1 00:42:09.615 --rc geninfo_unexecuted_blocks=1 00:42:09.615 00:42:09.615 ' 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:09.615 05:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:09.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:42:09.615 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:42:10.551 
05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:10.551 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:10.551 05:37:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:10.551 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:10.551 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:10.552 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:10.552 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:10.552 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:10.809 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:10.809 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:10.809 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:10.809 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:10.809 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:10.809 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:10.809 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:10.809 05:37:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:10.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:10.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:42:10.809 00:42:10.809 --- 10.0.0.2 ping statistics --- 00:42:10.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:10.809 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:42:10.809 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:10.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:10.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:42:10.809 00:42:10.809 --- 10.0.0.1 ping statistics --- 00:42:10.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:10.810 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:42:10.810 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:10.810 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:42:10.810 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:10.810 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:10.810 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:10.810 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:10.810 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:10.810 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:10.810 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:10.810 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:42:10.810 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:10.810 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:10.810 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:10.810 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=636258 00:42:10.810 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:42:10.810 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 636258 00:42:10.810 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 636258 ']' 00:42:10.810 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:10.810 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:10.810 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
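The nvmf_tcp_init trace above amounts to splitting the two detected e810 ports between the host and a fresh network namespace, so the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) reach each other over a real TCP path. A condensed sketch of those steps using the interface names and addresses from this run; the actual logic, including the SPDK_NVMF comment tag on the firewall rule, lives in test/nvmf/common.sh:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # accept NVMe/TCP (port 4420) traffic arriving on the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # host -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> host
    modprobe nvme-tcp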
00:42:10.810 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:10.810 05:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:11.068 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:11.068 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:42:11.068 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:11.068 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:11.068 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:11.068 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:11.068 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=636278 00:42:11.068 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:42:11.068 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:42:11.068 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:42:11.068 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:42:11.068 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:42:11.068 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:42:11.068 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:42:11.068 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:42:11.068 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:42:11.068 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=370c2d0c87687cbff151d8f81df82c34d2e53816aed6776c 00:42:11.068 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:42:11.068 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.wJe 00:42:11.068 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 370c2d0c87687cbff151d8f81df82c34d2e53816aed6776c 0 00:42:11.068 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 370c2d0c87687cbff151d8f81df82c34d2e53816aed6776c 0 00:42:11.068 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:42:11.068 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:42:11.068 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=370c2d0c87687cbff151d8f81df82c34d2e53816aed6776c 00:42:11.068 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:42:11.068 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
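The gen_dhchap_key calls that follow all reduce to the same recipe: read len/2 random bytes from /dev/urandom as a hex string, wrap it as a DHHC-1 secret (the "python -" step), and park the result mode 0600 in a temp file. A minimal re-creation under those assumptions; gen_key_sketch is a hypothetical stand-in, and the DHHC-1:<digest id>:<base64 payload>: wrapping the real helper applies (digest ids 0-3 for null/sha256/sha384/sha512, as in the digests table above) is only noted in a comment:

    # Hypothetical stand-in for gen_dhchap_key <digest> <len-in-hex-chars>; the real
    # helper additionally wraps the hex secret as DHHC-1:<digest id>:<base64 payload>:
    # via the inline "python -" step traced above.
    gen_key_sketch() {
        local digest=$1 len=$2 key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # e.g. 24 bytes -> 48 hex chars
        file=$(mktemp -t "spdk.key-${digest}.XXX")
        echo "$key" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }
    gen_key_sketch null 48      # mirrors the key0 generation above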
00:42:11.330 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.wJe 00:42:11.330 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.wJe 00:42:11.330 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.wJe 00:42:11.330 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:42:11.330 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:42:11.330 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:42:11.330 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:42:11.330 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:42:11.330 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:42:11.330 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:42:11.330 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=52a2acb860fcf7a78dab06e5bcb3dfbcba3f36119d9246ac3c6ca5b3dcdcc70e 00:42:11.330 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:42:11.330 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.MkE 00:42:11.330 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 52a2acb860fcf7a78dab06e5bcb3dfbcba3f36119d9246ac3c6ca5b3dcdcc70e 3 00:42:11.330 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 52a2acb860fcf7a78dab06e5bcb3dfbcba3f36119d9246ac3c6ca5b3dcdcc70e 3 00:42:11.330 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:42:11.330 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:42:11.330 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=52a2acb860fcf7a78dab06e5bcb3dfbcba3f36119d9246ac3c6ca5b3dcdcc70e 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.MkE 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.MkE 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.MkE 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bf56e05f9ee0afac9e9a3fae20e85d62 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.seS 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bf56e05f9ee0afac9e9a3fae20e85d62 1 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bf56e05f9ee0afac9e9a3fae20e85d62 1 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bf56e05f9ee0afac9e9a3fae20e85d62 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.seS 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.seS 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.seS 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=52bd8101eab780a03083083efc34f08b6a9e11b3d4d189d6 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.b9b 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 52bd8101eab780a03083083efc34f08b6a9e11b3d4d189d6 2 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 52bd8101eab780a03083083efc34f08b6a9e11b3d4d189d6 2 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:42:11.331 05:37:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=52bd8101eab780a03083083efc34f08b6a9e11b3d4d189d6 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.b9b 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.b9b 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.b9b 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c92df4c93e61982da5f6fdc88bf429c3ca745364e4059f5d 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.j0e 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c92df4c93e61982da5f6fdc88bf429c3ca745364e4059f5d 2 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c92df4c93e61982da5f6fdc88bf429c3ca745364e4059f5d 2 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c92df4c93e61982da5f6fdc88bf429c3ca745364e4059f5d 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.j0e 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.j0e 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.j0e 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
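Each index mints two secrets: keys[i] for the host and ckeys[i] as the controller-side secret used when the target must authenticate back to the host. They resurface below as key$i/ckey$i keyring entries and as the --dhchap-key/--dhchap-ctrlr-key pair on the subsystem; for instance, the index-1 files generated above end up being consumed roughly like this:

    rpc_cmd keyring_file_add_key key1  /tmp/spdk.key-sha256.seS     # target keyring
    rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.b9b
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1                  # bidirectional auth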
00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=747cbb38cf80c4e144a3394bac70c49f 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.OSW 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 747cbb38cf80c4e144a3394bac70c49f 1 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 747cbb38cf80c4e144a3394bac70c49f 1 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=747cbb38cf80c4e144a3394bac70c49f 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.OSW 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.OSW 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.OSW 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:42:11.331 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2f03b4cd5d6cb5776cb1e098ba62f3b432c0fa1dcde7119557858bc6faab498b 00:42:11.607 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:42:11.607 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.RxH 00:42:11.607 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 2f03b4cd5d6cb5776cb1e098ba62f3b432c0fa1dcde7119557858bc6faab498b 3 00:42:11.607 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2f03b4cd5d6cb5776cb1e098ba62f3b432c0fa1dcde7119557858bc6faab498b 3 00:42:11.607 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:42:11.607 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:42:11.607 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2f03b4cd5d6cb5776cb1e098ba62f3b432c0fa1dcde7119557858bc6faab498b 00:42:11.607 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:42:11.607 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:42:11.607 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.RxH 00:42:11.607 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.RxH 00:42:11.607 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.RxH 00:42:11.607 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:42:11.607 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 636258 00:42:11.607 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 636258 ']' 00:42:11.607 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:11.607 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:11.607 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:11.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:11.607 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:11.607 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:11.864 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:11.864 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:42:11.864 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 636278 /var/tmp/host.sock 00:42:11.864 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 636278 ']' 00:42:11.864 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:42:11.864 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:11.864 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:42:11.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
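By this point two SPDK processes are up: nvmf_tgt (pid 636258) inside the target namespace with -L nvmf_auth, answering on the default /var/tmp/spdk.sock, and a second spdk_tgt (pid 636278) standing in for the host on /var/tmp/host.sock with -L nvme_auth; rpc_cmd drives the former and hostrpc wraps rpc.py -s /var/tmp/host.sock for the latter. A rough sketch of that startup with paths shortened; the one-line rpc_cmd/hostrpc definitions are approximations of the suite's helpers:

    # Target side: nvmf_tgt inside the namespace, DH-HMAC-CHAP debug logging on
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    nvmfpid=$!
    waitforlisten "$nvmfpid"                    # polls /var/tmp/spdk.sock

    # Host side: a second SPDK app acting as the initiator
    ./build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &
    hostpid=$!
    waitforlisten "$hostpid" /var/tmp/host.sock

    # RPC plumbing used by the rest of the test (approximate definitions)
    rpc_cmd() { ./scripts/rpc.py "$@"; }                        # target socket
    hostrpc() { ./scripts/rpc.py -s /var/tmp/host.sock "$@"; }  # host socket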
00:42:11.864 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:11.864 05:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:12.121 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:12.121 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:42:12.121 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:42:12.121 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.121 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:12.121 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.121 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:42:12.121 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wJe 00:42:12.121 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.121 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:12.121 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.121 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.wJe 00:42:12.121 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.wJe 00:42:12.379 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.MkE ]] 00:42:12.379 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MkE 00:42:12.379 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.379 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:12.379 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.379 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MkE 00:42:12.379 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MkE 00:42:12.636 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:42:12.636 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.seS 00:42:12.636 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.636 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:12.636 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.636 05:37:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.seS 00:42:12.636 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.seS 00:42:12.895 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.b9b ]] 00:42:12.895 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.b9b 00:42:12.895 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.895 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:12.895 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.895 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.b9b 00:42:12.895 05:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.b9b 00:42:13.152 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:42:13.152 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.j0e 00:42:13.152 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:13.152 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:13.152 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:13.152 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.j0e 00:42:13.152 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.j0e 00:42:13.408 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.OSW ]] 00:42:13.408 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.OSW 00:42:13.409 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:13.409 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:13.409 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:13.409 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.OSW 00:42:13.409 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.OSW 00:42:13.665 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:42:13.665 05:37:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.RxH 00:42:13.665 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:13.665 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:13.665 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:13.665 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.RxH 00:42:13.665 05:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.RxH 00:42:13.922 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:42:13.922 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:42:13.922 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:42:13.922 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:13.922 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:42:13.922 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:42:14.180 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:42:14.180 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:14.180 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:14.180 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:42:14.180 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:42:14.180 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:14.180 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:14.180 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.180 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:14.180 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.180 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:14.180 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:14.180 
05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:14.747 00:42:14.747 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:14.747 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:14.747 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:15.005 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:15.005 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:15.005 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.005 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:15.005 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.005 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:15.005 { 00:42:15.005 "cntlid": 1, 00:42:15.005 "qid": 0, 00:42:15.005 "state": "enabled", 00:42:15.005 "thread": "nvmf_tgt_poll_group_000", 00:42:15.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:42:15.005 "listen_address": { 00:42:15.005 "trtype": "TCP", 00:42:15.005 "adrfam": "IPv4", 00:42:15.005 "traddr": "10.0.0.2", 00:42:15.005 "trsvcid": "4420" 00:42:15.005 }, 00:42:15.005 "peer_address": { 00:42:15.005 "trtype": "TCP", 00:42:15.005 "adrfam": "IPv4", 00:42:15.005 "traddr": "10.0.0.1", 00:42:15.005 "trsvcid": "56758" 00:42:15.005 }, 00:42:15.005 "auth": { 00:42:15.005 "state": "completed", 00:42:15.005 "digest": "sha256", 00:42:15.005 "dhgroup": "null" 00:42:15.005 } 00:42:15.005 } 00:42:15.005 ]' 00:42:15.005 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:15.005 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:15.005 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:15.005 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:42:15.005 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:15.005 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:15.005 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:15.005 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:15.263 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
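Each attach above is followed by the same verification: the host RPC confirms the controller name, and the target-side qpair listing is filtered with jq to confirm the negotiated digest, dhgroup and authentication state. Condensed into the underlying commands (same sockets and NQNs as this run, expected values for the sha256/null pass):

    # host side: the freshly attached controller should be reported as nvme0
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
    # target side: the qpair carries the negotiated auth parameters
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    jq -r '.[0].auth.digest'  <<< "$qpairs"   # expect: sha256
    jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expect: null
    jq -r '.[0].auth.state'   <<< "$qpairs"   # expect: completed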
DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:42:15.263 05:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:42:16.197 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:16.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:16.197 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:16.197 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.197 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:16.197 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.197 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:16.197 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:42:16.197 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:42:16.454 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:42:16.455 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:16.455 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:16.455 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:42:16.455 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:42:16.455 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:16.455 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:16.455 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.455 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:16.455 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.455 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:16.455 05:37:10 
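Alongside the SPDK host, every key is also exercised with the kernel initiator: nvme-cli connects with the host secret (plus the controller secret when bidirectional authentication is configured), then the session is torn down and the host entry removed before the next key is tried. Trimmed to its essentials, with the secrets elided here because they appear in full in the trace:

    # kernel initiator: authenticate to cnode0 over TCP with DH-HMAC-CHAP
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
    # clean up before the next key/dhgroup combination
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55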
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:16.455 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:17.020 00:42:17.020 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:17.020 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:17.020 05:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:17.020 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:17.020 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:17.020 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.020 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:17.277 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.277 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:17.277 { 00:42:17.277 "cntlid": 3, 00:42:17.277 "qid": 0, 00:42:17.277 "state": "enabled", 00:42:17.277 "thread": "nvmf_tgt_poll_group_000", 00:42:17.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:42:17.277 "listen_address": { 00:42:17.277 "trtype": "TCP", 00:42:17.277 "adrfam": "IPv4", 00:42:17.277 "traddr": "10.0.0.2", 00:42:17.277 "trsvcid": "4420" 00:42:17.277 }, 00:42:17.277 "peer_address": { 00:42:17.277 "trtype": "TCP", 00:42:17.277 "adrfam": "IPv4", 00:42:17.277 "traddr": "10.0.0.1", 00:42:17.277 "trsvcid": "56790" 00:42:17.277 }, 00:42:17.277 "auth": { 00:42:17.277 "state": "completed", 00:42:17.277 "digest": "sha256", 00:42:17.277 "dhgroup": "null" 00:42:17.277 } 00:42:17.277 } 00:42:17.277 ]' 00:42:17.277 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:17.277 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:17.277 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:17.277 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:42:17.277 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:17.277 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:17.277 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:17.277 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:17.534 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:42:17.534 05:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:42:18.465 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:18.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:18.465 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:18.465 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.465 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:18.465 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.465 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:18.465 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:42:18.465 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:42:18.722 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:42:18.722 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:18.722 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:18.722 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:42:18.722 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:42:18.722 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:18.722 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:18.722 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.722 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:18.722 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.722 05:37:12 
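A note on the secret strings traded above: DH-HMAC-CHAP secrets use the form DHHC-1:<t>:<base64>:, where <t> names the hash applied to the configured key (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 field carries the key material with a trailing CRC. The keys in this run were generated before the excerpt starts; as a hedged aside (not part of this trace, and option spelling can differ between nvme-cli releases), a compatible secret can typically be produced with:

    # illustrative only: emit a 32-byte secret transformed with SHA-256 for this host NQN
    nvme gen-dhchap-key --hmac=1 --key-length=32 \
        --nqn nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55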
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:18.722 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:18.722 05:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:18.979 00:42:18.979 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:18.979 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:18.979 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:19.237 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:19.237 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:19.237 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.237 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:19.495 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.495 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:19.495 { 00:42:19.495 "cntlid": 5, 00:42:19.495 "qid": 0, 00:42:19.495 "state": "enabled", 00:42:19.495 "thread": "nvmf_tgt_poll_group_000", 00:42:19.495 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:42:19.495 "listen_address": { 00:42:19.495 "trtype": "TCP", 00:42:19.495 "adrfam": "IPv4", 00:42:19.495 "traddr": "10.0.0.2", 00:42:19.495 "trsvcid": "4420" 00:42:19.495 }, 00:42:19.495 "peer_address": { 00:42:19.495 "trtype": "TCP", 00:42:19.495 "adrfam": "IPv4", 00:42:19.495 "traddr": "10.0.0.1", 00:42:19.495 "trsvcid": "56818" 00:42:19.495 }, 00:42:19.495 "auth": { 00:42:19.495 "state": "completed", 00:42:19.495 "digest": "sha256", 00:42:19.495 "dhgroup": "null" 00:42:19.495 } 00:42:19.495 } 00:42:19.495 ]' 00:42:19.495 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:19.495 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:19.495 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:19.495 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:42:19.495 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:19.495 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:19.495 05:37:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:19.495 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:19.753 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:42:19.753 05:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:42:20.686 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:20.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:20.686 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:20.686 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:20.686 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:20.686 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:20.686 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:20.686 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:42:20.686 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:42:20.944 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:42:20.944 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:20.944 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:20.944 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:42:20.944 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:42:20.944 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:20.944 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:42:20.944 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:20.944 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:42:20.944 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:20.944 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:42:20.944 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:42:20.944 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:42:21.201 00:42:21.201 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:21.201 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:21.201 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:21.458 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:21.458 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:21.458 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.458 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:21.458 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.458 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:21.458 { 00:42:21.458 "cntlid": 7, 00:42:21.458 "qid": 0, 00:42:21.459 "state": "enabled", 00:42:21.459 "thread": "nvmf_tgt_poll_group_000", 00:42:21.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:42:21.459 "listen_address": { 00:42:21.459 "trtype": "TCP", 00:42:21.459 "adrfam": "IPv4", 00:42:21.459 "traddr": "10.0.0.2", 00:42:21.459 "trsvcid": "4420" 00:42:21.459 }, 00:42:21.459 "peer_address": { 00:42:21.459 "trtype": "TCP", 00:42:21.459 "adrfam": "IPv4", 00:42:21.459 "traddr": "10.0.0.1", 00:42:21.459 "trsvcid": "56840" 00:42:21.459 }, 00:42:21.459 "auth": { 00:42:21.459 "state": "completed", 00:42:21.459 "digest": "sha256", 00:42:21.459 "dhgroup": "null" 00:42:21.459 } 00:42:21.459 } 00:42:21.459 ]' 00:42:21.459 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:21.727 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:21.727 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:21.727 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:42:21.727 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:21.727 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
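Worth noting in the key3 pass above: there is no companion ckey3, so nvmf_subsystem_add_host is called with --dhchap-key only and the attach likewise omits --dhchap-ctrlr-key. In other words this round is unidirectional: the target authenticates the host, but the host does not verify the controller. Reduced to the two calls involved:

    # unidirectional DH-HMAC-CHAP: host key only, no controller key
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3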
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:21.727 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:21.727 05:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:21.985 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:42:21.985 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:42:22.917 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:22.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:22.917 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:22.917 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:22.917 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:22.917 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:22.917 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:42:22.917 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:22.917 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:42:22.917 05:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:42:23.174 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:42:23.175 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:23.175 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:23.175 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:42:23.175 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:42:23.175 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:23.175 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:23.175 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
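From this point the trace repeats the same cycle with ffdhe2048 (and later larger groups) substituted for null: the outer loops at target/auth.sh lines 118-123 walk every digest, every dhgroup and every key index through the same set-options / add-host / attach / verify / disconnect sequence. Schematically, assuming the digests, dhgroups and keys arrays populated earlier in auth.sh:

    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
          # pin the host to one digest/dhgroup, then run one authenticated round trip
          hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done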
common/autotest_common.sh@563 -- # xtrace_disable 00:42:23.175 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:23.175 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:23.175 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:23.175 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:23.175 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:23.433 00:42:23.433 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:23.433 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:23.433 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:23.690 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:23.690 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:23.690 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:23.690 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:23.690 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:23.690 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:23.690 { 00:42:23.690 "cntlid": 9, 00:42:23.690 "qid": 0, 00:42:23.690 "state": "enabled", 00:42:23.690 "thread": "nvmf_tgt_poll_group_000", 00:42:23.690 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:42:23.690 "listen_address": { 00:42:23.690 "trtype": "TCP", 00:42:23.690 "adrfam": "IPv4", 00:42:23.690 "traddr": "10.0.0.2", 00:42:23.690 "trsvcid": "4420" 00:42:23.690 }, 00:42:23.690 "peer_address": { 00:42:23.690 "trtype": "TCP", 00:42:23.690 "adrfam": "IPv4", 00:42:23.690 "traddr": "10.0.0.1", 00:42:23.690 "trsvcid": "36634" 00:42:23.690 }, 00:42:23.690 "auth": { 00:42:23.690 "state": "completed", 00:42:23.690 "digest": "sha256", 00:42:23.690 "dhgroup": "ffdhe2048" 00:42:23.690 } 00:42:23.690 } 00:42:23.690 ]' 00:42:23.690 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:23.947 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:23.947 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:23.947 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:42:23.947 05:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:23.947 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:23.947 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:23.947 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:24.205 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:42:24.205 05:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:42:25.136 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:25.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:25.136 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:25.136 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:25.136 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:25.136 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:25.136 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:25.136 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:42:25.136 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:42:25.393 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:42:25.393 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:25.393 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:25.393 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:42:25.393 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:42:25.393 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:25.393 05:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:25.393 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:25.393 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:25.393 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:25.393 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:25.393 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:25.393 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:25.650 00:42:25.650 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:25.650 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:25.650 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:26.215 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:26.215 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:26.215 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.215 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:26.215 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.215 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:26.215 { 00:42:26.215 "cntlid": 11, 00:42:26.215 "qid": 0, 00:42:26.215 "state": "enabled", 00:42:26.215 "thread": "nvmf_tgt_poll_group_000", 00:42:26.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:42:26.215 "listen_address": { 00:42:26.215 "trtype": "TCP", 00:42:26.215 "adrfam": "IPv4", 00:42:26.215 "traddr": "10.0.0.2", 00:42:26.215 "trsvcid": "4420" 00:42:26.215 }, 00:42:26.215 "peer_address": { 00:42:26.215 "trtype": "TCP", 00:42:26.215 "adrfam": "IPv4", 00:42:26.215 "traddr": "10.0.0.1", 00:42:26.215 "trsvcid": "36652" 00:42:26.215 }, 00:42:26.215 "auth": { 00:42:26.215 "state": "completed", 00:42:26.215 "digest": "sha256", 00:42:26.215 "dhgroup": "ffdhe2048" 00:42:26.215 } 00:42:26.215 } 00:42:26.215 ]' 00:42:26.215 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:26.215 05:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:26.215 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:26.215 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:42:26.215 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:26.216 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:26.216 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:26.216 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:26.473 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:42:26.473 05:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:42:27.407 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:27.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:27.407 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:27.407 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.407 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:27.407 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.407 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:27.407 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:42:27.407 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:42:27.665 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:42:27.665 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:27.665 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:27.665 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:42:27.665 05:37:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:42:27.665 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:27.665 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:27.665 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.665 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:27.665 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.665 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:27.665 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:27.666 05:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:27.923 00:42:27.923 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:27.923 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:27.923 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:28.181 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:28.181 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:28.181 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.181 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:28.181 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.181 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:28.181 { 00:42:28.181 "cntlid": 13, 00:42:28.181 "qid": 0, 00:42:28.181 "state": "enabled", 00:42:28.181 "thread": "nvmf_tgt_poll_group_000", 00:42:28.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:42:28.181 "listen_address": { 00:42:28.181 "trtype": "TCP", 00:42:28.181 "adrfam": "IPv4", 00:42:28.181 "traddr": "10.0.0.2", 00:42:28.181 "trsvcid": "4420" 00:42:28.181 }, 00:42:28.181 "peer_address": { 00:42:28.181 "trtype": "TCP", 00:42:28.181 "adrfam": "IPv4", 00:42:28.181 "traddr": "10.0.0.1", 00:42:28.181 "trsvcid": "36686" 00:42:28.181 }, 00:42:28.181 "auth": { 00:42:28.181 "state": "completed", 00:42:28.181 "digest": 
"sha256", 00:42:28.181 "dhgroup": "ffdhe2048" 00:42:28.181 } 00:42:28.181 } 00:42:28.181 ]' 00:42:28.181 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:28.181 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:28.181 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:28.439 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:42:28.439 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:28.439 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:28.439 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:28.439 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:28.697 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:42:28.697 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:42:29.632 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:29.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:29.632 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:29.632 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.632 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:29.632 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.632 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:29.632 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:42:29.632 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:42:29.890 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:42:29.890 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:29.890 05:37:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:29.890 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:42:29.890 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:42:29.890 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:29.890 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:42:29.890 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.890 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:29.890 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.890 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:42:29.890 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:42:29.890 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:42:30.148 00:42:30.148 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:30.148 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:30.148 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:30.406 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:30.406 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:30.406 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:30.406 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:30.406 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:30.406 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:30.406 { 00:42:30.406 "cntlid": 15, 00:42:30.406 "qid": 0, 00:42:30.406 "state": "enabled", 00:42:30.406 "thread": "nvmf_tgt_poll_group_000", 00:42:30.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:42:30.406 "listen_address": { 00:42:30.406 "trtype": "TCP", 00:42:30.406 "adrfam": "IPv4", 00:42:30.406 "traddr": "10.0.0.2", 00:42:30.406 "trsvcid": "4420" 00:42:30.406 }, 00:42:30.406 "peer_address": { 00:42:30.406 "trtype": "TCP", 00:42:30.406 "adrfam": "IPv4", 00:42:30.406 "traddr": "10.0.0.1", 00:42:30.406 
"trsvcid": "36722" 00:42:30.406 }, 00:42:30.406 "auth": { 00:42:30.406 "state": "completed", 00:42:30.406 "digest": "sha256", 00:42:30.406 "dhgroup": "ffdhe2048" 00:42:30.406 } 00:42:30.406 } 00:42:30.406 ]' 00:42:30.406 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:30.406 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:30.406 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:30.665 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:42:30.665 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:30.665 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:30.665 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:30.665 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:30.924 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:42:30.924 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:42:31.863 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:31.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:31.863 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:31.863 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:31.863 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:31.863 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:31.863 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:42:31.863 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:31.863 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:31.863 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:32.120 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:42:32.120 05:37:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:32.120 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:32.120 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:42:32.120 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:42:32.120 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:32.120 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:32.120 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.120 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:32.120 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.120 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:32.120 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:32.120 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:32.378 00:42:32.378 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:32.378 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:32.378 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:32.635 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:32.635 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:32.635 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.635 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:32.635 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.635 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:32.635 { 00:42:32.635 "cntlid": 17, 00:42:32.635 "qid": 0, 00:42:32.635 "state": "enabled", 00:42:32.635 "thread": "nvmf_tgt_poll_group_000", 00:42:32.635 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:42:32.635 "listen_address": { 00:42:32.635 "trtype": "TCP", 00:42:32.635 "adrfam": "IPv4", 
00:42:32.635 "traddr": "10.0.0.2", 00:42:32.635 "trsvcid": "4420" 00:42:32.635 }, 00:42:32.635 "peer_address": { 00:42:32.635 "trtype": "TCP", 00:42:32.635 "adrfam": "IPv4", 00:42:32.635 "traddr": "10.0.0.1", 00:42:32.635 "trsvcid": "39688" 00:42:32.635 }, 00:42:32.635 "auth": { 00:42:32.635 "state": "completed", 00:42:32.635 "digest": "sha256", 00:42:32.635 "dhgroup": "ffdhe3072" 00:42:32.635 } 00:42:32.635 } 00:42:32.635 ]' 00:42:32.635 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:32.635 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:32.635 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:32.893 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:42:32.893 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:32.893 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:32.893 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:32.893 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:33.150 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:42:33.150 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:42:34.080 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:34.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:34.080 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:34.080 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.080 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:34.080 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.080 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:34.080 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:34.080 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:34.337 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:42:34.337 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:34.337 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:34.337 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:42:34.337 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:42:34.337 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:34.337 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:34.337 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.337 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:34.337 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.337 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:34.337 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:34.337 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:34.593 00:42:34.593 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:34.593 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:34.593 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:34.851 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:34.851 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:34.851 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.851 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:34.851 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.851 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:34.851 { 
00:42:34.851 "cntlid": 19, 00:42:34.851 "qid": 0, 00:42:34.851 "state": "enabled", 00:42:34.851 "thread": "nvmf_tgt_poll_group_000", 00:42:34.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:42:34.851 "listen_address": { 00:42:34.851 "trtype": "TCP", 00:42:34.851 "adrfam": "IPv4", 00:42:34.851 "traddr": "10.0.0.2", 00:42:34.851 "trsvcid": "4420" 00:42:34.851 }, 00:42:34.851 "peer_address": { 00:42:34.851 "trtype": "TCP", 00:42:34.851 "adrfam": "IPv4", 00:42:34.851 "traddr": "10.0.0.1", 00:42:34.851 "trsvcid": "39708" 00:42:34.851 }, 00:42:34.851 "auth": { 00:42:34.851 "state": "completed", 00:42:34.851 "digest": "sha256", 00:42:34.851 "dhgroup": "ffdhe3072" 00:42:34.851 } 00:42:34.851 } 00:42:34.851 ]' 00:42:34.851 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:34.851 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:34.851 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:35.108 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:42:35.108 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:35.108 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:35.108 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:35.108 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:35.367 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:42:35.367 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:42:36.297 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:36.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:36.298 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:36.298 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:36.298 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:36.298 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:36.298 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:36.298 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:36.298 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:36.554 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:42:36.554 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:36.554 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:36.554 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:42:36.554 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:42:36.554 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:36.554 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:36.554 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:36.554 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:36.554 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:36.554 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:36.555 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:36.555 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:36.812 00:42:36.812 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:36.812 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:36.812 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:37.069 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:37.069 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:37.069 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:37.069 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:37.069 05:37:31 
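
Each round traced above follows the same RPC sequence on the two SPDK applications. A condensed sketch for the key2/ffdhe3072 round is below; it assumes key2 and ckey2 were already registered in the target's keyring earlier in auth.sh (not visible in this excerpt) and that the target answers on rpc.py's default socket, as the rpc_cmd calls here do.

    # Sketch of one authentication round, mirroring the traced RPCs.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # Host app: restrict DH-HMAC-CHAP negotiation to one digest/dhgroup pair.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

    # Target app: authorize the host on the subsystem with key2, plus ckey2
    # for bidirectional (controller) authentication.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host app: attaching the controller triggers the DH-HMAC-CHAP handshake.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
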
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:37.069 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:37.069 { 00:42:37.069 "cntlid": 21, 00:42:37.069 "qid": 0, 00:42:37.069 "state": "enabled", 00:42:37.069 "thread": "nvmf_tgt_poll_group_000", 00:42:37.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:42:37.069 "listen_address": { 00:42:37.069 "trtype": "TCP", 00:42:37.069 "adrfam": "IPv4", 00:42:37.069 "traddr": "10.0.0.2", 00:42:37.069 "trsvcid": "4420" 00:42:37.069 }, 00:42:37.069 "peer_address": { 00:42:37.069 "trtype": "TCP", 00:42:37.069 "adrfam": "IPv4", 00:42:37.069 "traddr": "10.0.0.1", 00:42:37.069 "trsvcid": "39726" 00:42:37.069 }, 00:42:37.069 "auth": { 00:42:37.069 "state": "completed", 00:42:37.069 "digest": "sha256", 00:42:37.069 "dhgroup": "ffdhe3072" 00:42:37.069 } 00:42:37.069 } 00:42:37.069 ]' 00:42:37.069 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:37.326 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:37.326 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:37.326 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:42:37.326 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:37.326 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:37.326 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:37.326 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:37.584 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:42:37.584 05:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:42:38.516 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:38.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:38.516 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:38.516 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.516 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:38.516 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:42:38.516 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:38.516 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:38.516 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:38.773 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:42:38.773 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:38.773 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:38.773 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:42:38.773 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:42:38.773 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:38.773 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:42:38.773 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.773 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:38.773 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.773 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:42:38.773 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:42:38.773 05:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:42:39.032 00:42:39.323 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:39.323 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:39.323 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:39.323 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:39.323 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:39.323 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:39.323 05:37:33 
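
The ckey assignment at auth.sh@68 above is what drops the controller key for the key3 round: bash's ${var:+word} expansion inside an array assignment yields an empty array when no controller key is configured for that slot, so nvmf_subsystem_add_host and bdev_nvme_attach_controller are called with --dhchap-key key3 only. A standalone illustration of the idiom, with made-up slot contents:

    ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2)   # no controller key registered for slot 3
    keyid=3
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${#ckey[@]}"   # 0 -> the flag pair is omitted from the RPC call entirely
    keyid=1
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${ckey[@]}"    # --dhchap-ctrlr-key ckey1
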
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:39.636 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:39.636 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:39.636 { 00:42:39.636 "cntlid": 23, 00:42:39.636 "qid": 0, 00:42:39.636 "state": "enabled", 00:42:39.636 "thread": "nvmf_tgt_poll_group_000", 00:42:39.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:42:39.636 "listen_address": { 00:42:39.636 "trtype": "TCP", 00:42:39.636 "adrfam": "IPv4", 00:42:39.636 "traddr": "10.0.0.2", 00:42:39.636 "trsvcid": "4420" 00:42:39.636 }, 00:42:39.636 "peer_address": { 00:42:39.636 "trtype": "TCP", 00:42:39.636 "adrfam": "IPv4", 00:42:39.636 "traddr": "10.0.0.1", 00:42:39.636 "trsvcid": "39758" 00:42:39.636 }, 00:42:39.636 "auth": { 00:42:39.636 "state": "completed", 00:42:39.636 "digest": "sha256", 00:42:39.636 "dhgroup": "ffdhe3072" 00:42:39.636 } 00:42:39.636 } 00:42:39.636 ]' 00:42:39.636 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:39.636 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:39.636 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:39.636 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:42:39.636 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:39.636 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:39.636 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:39.636 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:39.894 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:42:39.894 05:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:42:40.827 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:40.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:40.827 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:40.827 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.827 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:40.827 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:42:40.827 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:42:40.827 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:40.827 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:40.827 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:41.091 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:42:41.091 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:41.091 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:41.091 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:42:41.091 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:42:41.091 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:41.091 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:41.091 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:41.091 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:41.091 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:41.091 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:41.091 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:41.091 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:41.350 00:42:41.350 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:41.350 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:41.350 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:41.608 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:41.608 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:41.608 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:41.608 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:41.608 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:41.608 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:41.608 { 00:42:41.608 "cntlid": 25, 00:42:41.608 "qid": 0, 00:42:41.608 "state": "enabled", 00:42:41.608 "thread": "nvmf_tgt_poll_group_000", 00:42:41.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:42:41.608 "listen_address": { 00:42:41.608 "trtype": "TCP", 00:42:41.608 "adrfam": "IPv4", 00:42:41.608 "traddr": "10.0.0.2", 00:42:41.608 "trsvcid": "4420" 00:42:41.608 }, 00:42:41.608 "peer_address": { 00:42:41.608 "trtype": "TCP", 00:42:41.608 "adrfam": "IPv4", 00:42:41.608 "traddr": "10.0.0.1", 00:42:41.608 "trsvcid": "39790" 00:42:41.608 }, 00:42:41.608 "auth": { 00:42:41.608 "state": "completed", 00:42:41.608 "digest": "sha256", 00:42:41.609 "dhgroup": "ffdhe4096" 00:42:41.609 } 00:42:41.609 } 00:42:41.609 ]' 00:42:41.609 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:41.609 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:41.609 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:41.866 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:42:41.866 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:41.866 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:41.866 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:41.866 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:42.124 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:42:42.124 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:42:43.057 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:43.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:43.057 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:43.057 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:43.057 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:43.057 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:43.057 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:43.057 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:43.057 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:43.314 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:42:43.314 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:43.314 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:43.314 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:42:43.314 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:42:43.314 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:43.314 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:43.314 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:43.314 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:43.314 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:43.314 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:43.314 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:43.314 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:43.572 00:42:43.572 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:43.572 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:43.572 05:37:37 
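
After every attach the script asserts that the controller came up under its -b name and that the target-side qpair reports a completed DH-HMAC-CHAP exchange with the negotiated parameters. A minimal reconstruction of those checks follows; the hostrpc wrapper is rebuilt from the auth.sh@31 trace, and the expected digest/dhgroup values match this ffdhe4096 round.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }

    # The attached bdev controller must be listed under the requested name.
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

    # The target-side qpair must show a completed auth exchange with the
    # digest and dhgroup configured for this round.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe4096" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
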
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:43.828 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:43.828 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:43.828 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:43.828 05:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:43.828 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:43.828 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:43.828 { 00:42:43.828 "cntlid": 27, 00:42:43.828 "qid": 0, 00:42:43.828 "state": "enabled", 00:42:43.828 "thread": "nvmf_tgt_poll_group_000", 00:42:43.828 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:42:43.828 "listen_address": { 00:42:43.828 "trtype": "TCP", 00:42:43.828 "adrfam": "IPv4", 00:42:43.828 "traddr": "10.0.0.2", 00:42:43.828 "trsvcid": "4420" 00:42:43.828 }, 00:42:43.828 "peer_address": { 00:42:43.828 "trtype": "TCP", 00:42:43.828 "adrfam": "IPv4", 00:42:43.829 "traddr": "10.0.0.1", 00:42:43.829 "trsvcid": "32934" 00:42:43.829 }, 00:42:43.829 "auth": { 00:42:43.829 "state": "completed", 00:42:43.829 "digest": "sha256", 00:42:43.829 "dhgroup": "ffdhe4096" 00:42:43.829 } 00:42:43.829 } 00:42:43.829 ]' 00:42:43.829 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:43.829 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:43.829 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:44.086 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:42:44.086 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:44.086 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:44.086 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:44.086 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:44.342 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:42:44.342 05:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:42:45.290 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:45.290 
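
Besides the SPDK bdev path, each round also authenticates the kernel initiator: nvme connect is invoked with the host and controller secrets passed inline in nvme-cli's DHHC-1 string format, and the controller is later torn down by NQN. Every flag below is taken from the traced command; the placeholders stand in for the base64 secrets shown in the log.

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
        --dhchap-secret 'DHHC-1:01:<host secret, base64>:' \
        --dhchap-ctrl-secret 'DHHC-1:02:<controller secret, base64>:'
    # exercise the controller here if needed, then tear it down by NQN
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
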
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:45.290 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:45.290 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.290 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:45.290 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.290 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:45.290 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:45.290 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:45.548 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:42:45.548 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:45.548 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:45.548 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:42:45.548 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:42:45.548 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:45.548 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:45.548 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.548 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:45.548 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.548 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:45.548 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:45.548 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:45.806 00:42:45.806 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:45.806 05:37:39 
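
Every round ends with the same three cleanup calls seen in the trace, so the next key/dhgroup combination starts from a clean state: detach the bdev controller on the host application (auth.sh@78), disconnect the kernel initiator (auth.sh@82), and de-authorize the host on the subsystem (auth.sh@83). Collected into one sketch:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0   # host bdev path
    nvme disconnect -n "$subnqn"                                     # kernel initiator
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"           # target subsystem
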
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:45.806 05:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:46.063 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:46.063 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:46.063 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.063 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:46.063 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.063 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:46.063 { 00:42:46.063 "cntlid": 29, 00:42:46.063 "qid": 0, 00:42:46.063 "state": "enabled", 00:42:46.063 "thread": "nvmf_tgt_poll_group_000", 00:42:46.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:42:46.063 "listen_address": { 00:42:46.063 "trtype": "TCP", 00:42:46.063 "adrfam": "IPv4", 00:42:46.063 "traddr": "10.0.0.2", 00:42:46.063 "trsvcid": "4420" 00:42:46.063 }, 00:42:46.063 "peer_address": { 00:42:46.063 "trtype": "TCP", 00:42:46.063 "adrfam": "IPv4", 00:42:46.063 "traddr": "10.0.0.1", 00:42:46.063 "trsvcid": "32942" 00:42:46.063 }, 00:42:46.063 "auth": { 00:42:46.063 "state": "completed", 00:42:46.063 "digest": "sha256", 00:42:46.063 "dhgroup": "ffdhe4096" 00:42:46.063 } 00:42:46.063 } 00:42:46.063 ]' 00:42:46.063 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:46.321 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:46.321 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:46.321 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:42:46.321 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:46.321 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:46.321 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:46.321 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:46.579 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:42:46.579 05:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret 
DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:42:47.511 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:47.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:47.511 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:47.511 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.512 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:47.512 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.512 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:47.512 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:47.512 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:47.769 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:42:47.769 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:47.769 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:47.769 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:42:47.769 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:42:47.769 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:47.769 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:42:47.769 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.769 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:47.769 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.769 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:42:47.769 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:42:47.769 05:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:42:48.335 00:42:48.335 05:37:42 
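
The auth.sh@119/@120 markers in the trace show how the sweep is organized: an outer loop over DH groups and an inner loop over the key slots, re-applying bdev_nvme_set_options before each connect_authenticate call. The sketch below reconstructs that shape from what is visible in this excerpt; the array contents are illustrative (only sha256 and these three ffdhe groups appear here), and it reuses the hostrpc and connect_authenticate helpers traced above.

    keys=(key0 key1 key2 key3)                 # slot names as used in the RPCs
    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)   # groups exercised in this excerpt
    for dhgroup in "${dhgroups[@]}"; do        # auth.sh@119
        for keyid in "${!keys[@]}"; do         # auth.sh@120
            hostrpc bdev_nvme_set_options \
                --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"   # auth.sh@121
            connect_authenticate sha256 "$dhgroup" "$keyid"            # auth.sh@123
        done
    done
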
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:48.335 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:48.335 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:48.335 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:48.335 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:48.335 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:48.335 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:48.335 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:48.335 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:48.335 { 00:42:48.335 "cntlid": 31, 00:42:48.335 "qid": 0, 00:42:48.335 "state": "enabled", 00:42:48.335 "thread": "nvmf_tgt_poll_group_000", 00:42:48.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:42:48.335 "listen_address": { 00:42:48.335 "trtype": "TCP", 00:42:48.335 "adrfam": "IPv4", 00:42:48.335 "traddr": "10.0.0.2", 00:42:48.335 "trsvcid": "4420" 00:42:48.335 }, 00:42:48.335 "peer_address": { 00:42:48.335 "trtype": "TCP", 00:42:48.335 "adrfam": "IPv4", 00:42:48.335 "traddr": "10.0.0.1", 00:42:48.335 "trsvcid": "32976" 00:42:48.335 }, 00:42:48.335 "auth": { 00:42:48.335 "state": "completed", 00:42:48.335 "digest": "sha256", 00:42:48.335 "dhgroup": "ffdhe4096" 00:42:48.335 } 00:42:48.335 } 00:42:48.335 ]' 00:42:48.335 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:48.593 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:48.593 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:48.593 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:42:48.593 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:48.593 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:48.593 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:48.593 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:48.850 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:42:48.850 05:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:42:49.784 05:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:49.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:49.784 05:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:49.784 05:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:49.784 05:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:49.784 05:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:49.784 05:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:42:49.784 05:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:49.784 05:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:49.784 05:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:50.042 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:42:50.042 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:50.042 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:50.042 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:42:50.042 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:42:50.042 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:50.042 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:50.042 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:50.042 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:50.042 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:50.042 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:50.042 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:50.042 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:50.607 00:42:50.607 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:50.607 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:50.607 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:50.866 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:50.866 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:50.866 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:50.866 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:50.866 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:50.866 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:50.866 { 00:42:50.866 "cntlid": 33, 00:42:50.866 "qid": 0, 00:42:50.866 "state": "enabled", 00:42:50.866 "thread": "nvmf_tgt_poll_group_000", 00:42:50.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:42:50.866 "listen_address": { 00:42:50.866 "trtype": "TCP", 00:42:50.866 "adrfam": "IPv4", 00:42:50.866 "traddr": "10.0.0.2", 00:42:50.866 "trsvcid": "4420" 00:42:50.866 }, 00:42:50.866 "peer_address": { 00:42:50.866 "trtype": "TCP", 00:42:50.866 "adrfam": "IPv4", 00:42:50.866 "traddr": "10.0.0.1", 00:42:50.866 "trsvcid": "33008" 00:42:50.866 }, 00:42:50.866 "auth": { 00:42:50.866 "state": "completed", 00:42:50.866 "digest": "sha256", 00:42:50.866 "dhgroup": "ffdhe6144" 00:42:50.866 } 00:42:50.866 } 00:42:50.866 ]' 00:42:50.866 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:50.866 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:50.866 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:50.866 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:42:50.866 05:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:50.866 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:50.866 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:50.866 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:51.122 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret 
DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:42:51.122 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:42:52.051 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:52.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:52.051 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:52.051 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.051 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:52.051 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.051 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:52.051 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:52.051 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:52.309 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:42:52.309 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:52.309 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:52.309 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:42:52.309 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:42:52.309 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:52.309 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:52.309 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.309 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:52.309 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.309 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:52.309 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:52.309 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:52.873 00:42:52.873 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:52.873 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:52.873 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:53.129 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:53.129 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:53.129 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:53.129 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:53.129 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:53.129 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:53.129 { 00:42:53.129 "cntlid": 35, 00:42:53.129 "qid": 0, 00:42:53.129 "state": "enabled", 00:42:53.129 "thread": "nvmf_tgt_poll_group_000", 00:42:53.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:42:53.129 "listen_address": { 00:42:53.129 "trtype": "TCP", 00:42:53.129 "adrfam": "IPv4", 00:42:53.129 "traddr": "10.0.0.2", 00:42:53.129 "trsvcid": "4420" 00:42:53.129 }, 00:42:53.129 "peer_address": { 00:42:53.129 "trtype": "TCP", 00:42:53.129 "adrfam": "IPv4", 00:42:53.129 "traddr": "10.0.0.1", 00:42:53.129 "trsvcid": "60782" 00:42:53.129 }, 00:42:53.129 "auth": { 00:42:53.129 "state": "completed", 00:42:53.129 "digest": "sha256", 00:42:53.129 "dhgroup": "ffdhe6144" 00:42:53.129 } 00:42:53.129 } 00:42:53.129 ]' 00:42:53.129 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:53.387 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:53.387 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:53.387 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:42:53.387 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:53.387 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:53.387 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:53.387 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:53.645 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:42:53.645 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:42:54.614 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:54.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:54.614 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:54.614 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:54.614 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:54.614 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:54.614 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:54.614 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:54.614 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:54.870 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:42:54.870 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:54.870 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:54.870 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:42:54.870 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:42:54.870 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:54.870 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:54.870 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:54.870 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:54.870 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:54.870 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:54.870 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:54.870 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:55.435 00:42:55.435 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:55.435 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:55.435 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:55.692 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:55.692 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:55.692 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.692 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:55.692 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.692 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:55.692 { 00:42:55.692 "cntlid": 37, 00:42:55.692 "qid": 0, 00:42:55.692 "state": "enabled", 00:42:55.692 "thread": "nvmf_tgt_poll_group_000", 00:42:55.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:42:55.692 "listen_address": { 00:42:55.692 "trtype": "TCP", 00:42:55.692 "adrfam": "IPv4", 00:42:55.692 "traddr": "10.0.0.2", 00:42:55.692 "trsvcid": "4420" 00:42:55.692 }, 00:42:55.692 "peer_address": { 00:42:55.692 "trtype": "TCP", 00:42:55.692 "adrfam": "IPv4", 00:42:55.692 "traddr": "10.0.0.1", 00:42:55.692 "trsvcid": "60822" 00:42:55.692 }, 00:42:55.692 "auth": { 00:42:55.692 "state": "completed", 00:42:55.692 "digest": "sha256", 00:42:55.692 "dhgroup": "ffdhe6144" 00:42:55.692 } 00:42:55.692 } 00:42:55.692 ]' 00:42:55.692 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:55.692 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:55.692 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:55.692 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:42:55.692 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:55.692 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:55.692 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:42:55.692 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:55.950 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:42:55.950 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:42:56.883 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:56.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:56.883 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:56.883 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:56.883 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:56.883 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:56.883 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:56.883 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:56.883 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:57.141 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:42:57.141 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:57.141 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:57.141 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:42:57.141 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:42:57.141 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:57.141 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:42:57.141 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.141 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:57.141 05:37:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.141 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:42:57.141 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:42:57.141 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:42:57.706 00:42:57.706 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:42:57.706 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:42:57.706 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:57.964 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:57.964 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:57.964 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.964 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:57.964 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.964 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:42:57.964 { 00:42:57.964 "cntlid": 39, 00:42:57.964 "qid": 0, 00:42:57.964 "state": "enabled", 00:42:57.964 "thread": "nvmf_tgt_poll_group_000", 00:42:57.964 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:42:57.964 "listen_address": { 00:42:57.964 "trtype": "TCP", 00:42:57.964 "adrfam": "IPv4", 00:42:57.964 "traddr": "10.0.0.2", 00:42:57.964 "trsvcid": "4420" 00:42:57.964 }, 00:42:57.964 "peer_address": { 00:42:57.964 "trtype": "TCP", 00:42:57.964 "adrfam": "IPv4", 00:42:57.964 "traddr": "10.0.0.1", 00:42:57.964 "trsvcid": "60856" 00:42:57.964 }, 00:42:57.964 "auth": { 00:42:57.964 "state": "completed", 00:42:57.964 "digest": "sha256", 00:42:57.964 "dhgroup": "ffdhe6144" 00:42:57.964 } 00:42:57.964 } 00:42:57.964 ]' 00:42:57.964 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:42:58.222 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:58.222 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:42:58.222 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:42:58.222 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:42:58.222 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:42:58.222 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:58.222 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:58.480 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:42:58.480 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:42:59.413 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:59.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:59.413 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:59.413 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:59.413 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:59.413 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:59.413 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:42:59.413 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:42:59.413 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:42:59.413 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:42:59.671 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:42:59.671 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:42:59.671 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:42:59.671 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:42:59.671 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:42:59.671 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:59.671 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:59.671 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:42:59.671 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:59.671 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:59.671 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:59.671 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:59.671 05:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:00.604 00:43:00.605 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:00.605 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:00.605 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:00.862 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:00.862 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:00.862 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:00.862 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:00.862 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:00.862 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:00.862 { 00:43:00.862 "cntlid": 41, 00:43:00.862 "qid": 0, 00:43:00.862 "state": "enabled", 00:43:00.862 "thread": "nvmf_tgt_poll_group_000", 00:43:00.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:43:00.862 "listen_address": { 00:43:00.862 "trtype": "TCP", 00:43:00.862 "adrfam": "IPv4", 00:43:00.862 "traddr": "10.0.0.2", 00:43:00.862 "trsvcid": "4420" 00:43:00.862 }, 00:43:00.862 "peer_address": { 00:43:00.862 "trtype": "TCP", 00:43:00.862 "adrfam": "IPv4", 00:43:00.862 "traddr": "10.0.0.1", 00:43:00.862 "trsvcid": "60898" 00:43:00.862 }, 00:43:00.862 "auth": { 00:43:00.862 "state": "completed", 00:43:00.862 "digest": "sha256", 00:43:00.862 "dhgroup": "ffdhe8192" 00:43:00.862 } 00:43:00.862 } 00:43:00.862 ]' 00:43:00.862 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:00.862 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:43:00.862 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:00.863 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:43:00.863 05:37:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:00.863 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:00.863 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:00.863 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:01.120 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:43:01.120 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:43:02.052 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:02.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:02.052 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:02.052 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.052 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:02.052 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.052 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:02.052 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:43:02.052 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:43:02.308 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:43:02.308 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:02.308 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:43:02.308 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:43:02.308 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:43:02.308 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:02.308 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:02.308 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.308 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:02.308 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.308 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:02.308 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:02.309 05:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:03.239 00:43:03.239 05:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:03.239 05:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:03.239 05:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:03.496 05:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:03.496 05:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:03.496 05:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.496 05:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:03.496 05:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.496 05:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:03.496 { 00:43:03.496 "cntlid": 43, 00:43:03.496 "qid": 0, 00:43:03.496 "state": "enabled", 00:43:03.496 "thread": "nvmf_tgt_poll_group_000", 00:43:03.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:43:03.496 "listen_address": { 00:43:03.496 "trtype": "TCP", 00:43:03.496 "adrfam": "IPv4", 00:43:03.496 "traddr": "10.0.0.2", 00:43:03.496 "trsvcid": "4420" 00:43:03.496 }, 00:43:03.496 "peer_address": { 00:43:03.496 "trtype": "TCP", 00:43:03.496 "adrfam": "IPv4", 00:43:03.496 "traddr": "10.0.0.1", 00:43:03.496 "trsvcid": "50130" 00:43:03.496 }, 00:43:03.496 "auth": { 00:43:03.496 "state": "completed", 00:43:03.496 "digest": "sha256", 00:43:03.496 "dhgroup": "ffdhe8192" 00:43:03.496 } 00:43:03.496 } 00:43:03.496 ]' 00:43:03.496 05:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:03.496 05:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:43:03.496 05:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:03.496 05:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:43:03.496 05:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:03.754 05:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:03.754 05:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:03.754 05:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:04.011 05:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:43:04.011 05:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:43:04.943 05:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:04.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:04.943 05:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:04.943 05:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.943 05:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:04.943 05:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.943 05:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:04.943 05:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:43:04.943 05:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:43:05.201 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:43:05.201 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:05.201 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:43:05.201 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:43:05.201 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:43:05.201 05:37:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:05.201 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:05.201 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.201 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:05.201 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.201 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:05.201 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:05.201 05:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:06.150 00:43:06.150 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:06.150 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:06.150 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:06.408 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:06.408 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:06.408 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.408 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:06.408 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:06.408 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:06.408 { 00:43:06.408 "cntlid": 45, 00:43:06.408 "qid": 0, 00:43:06.408 "state": "enabled", 00:43:06.408 "thread": "nvmf_tgt_poll_group_000", 00:43:06.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:43:06.408 "listen_address": { 00:43:06.408 "trtype": "TCP", 00:43:06.408 "adrfam": "IPv4", 00:43:06.408 "traddr": "10.0.0.2", 00:43:06.408 "trsvcid": "4420" 00:43:06.408 }, 00:43:06.408 "peer_address": { 00:43:06.408 "trtype": "TCP", 00:43:06.408 "adrfam": "IPv4", 00:43:06.408 "traddr": "10.0.0.1", 00:43:06.408 "trsvcid": "50158" 00:43:06.408 }, 00:43:06.408 "auth": { 00:43:06.408 "state": "completed", 00:43:06.408 "digest": "sha256", 00:43:06.408 "dhgroup": "ffdhe8192" 00:43:06.408 } 00:43:06.408 } 00:43:06.408 ]' 00:43:06.408 
05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:06.408 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:43:06.408 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:06.408 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:43:06.408 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:06.408 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:06.408 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:06.408 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:06.972 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:43:06.972 05:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:43:07.905 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:07.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:07.905 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:07.905 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.905 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:07.905 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.905 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:07.905 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:43:07.905 05:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:43:07.905 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:43:07.905 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:07.905 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:43:07.905 05:38:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:43:07.905 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:43:07.905 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:07.905 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:43:07.905 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.905 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:07.905 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.905 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:43:07.905 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:43:07.905 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:43:08.836 00:43:08.836 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:08.836 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:08.836 05:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:09.094 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:09.094 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:09.094 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.094 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:09.094 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.094 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:09.094 { 00:43:09.094 "cntlid": 47, 00:43:09.094 "qid": 0, 00:43:09.094 "state": "enabled", 00:43:09.094 "thread": "nvmf_tgt_poll_group_000", 00:43:09.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:43:09.094 "listen_address": { 00:43:09.094 "trtype": "TCP", 00:43:09.094 "adrfam": "IPv4", 00:43:09.094 "traddr": "10.0.0.2", 00:43:09.094 "trsvcid": "4420" 00:43:09.094 }, 00:43:09.094 "peer_address": { 00:43:09.094 "trtype": "TCP", 00:43:09.094 "adrfam": "IPv4", 00:43:09.094 "traddr": "10.0.0.1", 00:43:09.094 "trsvcid": "50192" 00:43:09.094 }, 00:43:09.094 "auth": { 00:43:09.094 "state": "completed", 00:43:09.094 
"digest": "sha256", 00:43:09.094 "dhgroup": "ffdhe8192" 00:43:09.094 } 00:43:09.094 } 00:43:09.094 ]' 00:43:09.094 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:09.094 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:43:09.094 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:09.094 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:43:09.094 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:09.369 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:09.369 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:09.369 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:09.627 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:43:09.627 05:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:43:10.560 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:10.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:10.560 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:10.560 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:10.560 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:10.560 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:10.560 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:43:10.560 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:43:10.560 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:10.560 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:43:10.560 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:43:10.817 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:43:10.817 05:38:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:10.817 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:10.817 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:43:10.817 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:43:10.817 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:10.817 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:10.817 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:10.817 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:10.817 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:10.817 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:10.817 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:10.817 05:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:11.085 00:43:11.085 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:11.085 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:11.085 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:11.354 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:11.354 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:11.354 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:11.354 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:11.354 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:11.354 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:11.354 { 00:43:11.354 "cntlid": 49, 00:43:11.354 "qid": 0, 00:43:11.354 "state": "enabled", 00:43:11.354 "thread": "nvmf_tgt_poll_group_000", 00:43:11.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:43:11.354 "listen_address": { 00:43:11.354 "trtype": "TCP", 00:43:11.354 "adrfam": "IPv4", 
00:43:11.354 "traddr": "10.0.0.2", 00:43:11.354 "trsvcid": "4420" 00:43:11.354 }, 00:43:11.354 "peer_address": { 00:43:11.354 "trtype": "TCP", 00:43:11.354 "adrfam": "IPv4", 00:43:11.354 "traddr": "10.0.0.1", 00:43:11.354 "trsvcid": "50226" 00:43:11.354 }, 00:43:11.354 "auth": { 00:43:11.354 "state": "completed", 00:43:11.354 "digest": "sha384", 00:43:11.354 "dhgroup": "null" 00:43:11.354 } 00:43:11.354 } 00:43:11.354 ]' 00:43:11.354 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:11.354 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:11.354 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:11.354 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:43:11.354 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:11.613 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:11.613 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:11.613 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:11.871 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:43:11.871 05:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:43:12.812 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:12.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:12.812 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:12.812 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.812 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:12.812 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.812 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:12.812 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:43:12.812 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:43:13.070 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:43:13.070 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:13.070 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:13.070 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:43:13.070 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:43:13.070 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:13.070 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:13.070 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:13.070 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:13.070 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:13.070 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:13.070 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:13.070 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:13.328 00:43:13.328 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:13.328 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:13.328 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:13.585 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:13.585 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:13.585 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:13.585 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:13.585 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:13.585 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:13.585 { 00:43:13.585 "cntlid": 51, 00:43:13.585 "qid": 0, 00:43:13.585 "state": "enabled", 
00:43:13.585 "thread": "nvmf_tgt_poll_group_000", 00:43:13.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:43:13.585 "listen_address": { 00:43:13.585 "trtype": "TCP", 00:43:13.585 "adrfam": "IPv4", 00:43:13.585 "traddr": "10.0.0.2", 00:43:13.585 "trsvcid": "4420" 00:43:13.585 }, 00:43:13.585 "peer_address": { 00:43:13.585 "trtype": "TCP", 00:43:13.585 "adrfam": "IPv4", 00:43:13.585 "traddr": "10.0.0.1", 00:43:13.585 "trsvcid": "58562" 00:43:13.585 }, 00:43:13.585 "auth": { 00:43:13.585 "state": "completed", 00:43:13.585 "digest": "sha384", 00:43:13.585 "dhgroup": "null" 00:43:13.585 } 00:43:13.585 } 00:43:13.585 ]' 00:43:13.585 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:13.585 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:13.585 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:13.585 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:43:13.585 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:13.585 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:13.585 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:13.585 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:13.843 05:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:43:13.843 05:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:43:14.775 05:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:14.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:14.775 05:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:14.775 05:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.775 05:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:14.775 05:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.775 05:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:14.775 05:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:43:14.775 05:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:43:15.036 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:43:15.036 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:15.036 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:15.036 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:43:15.036 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:43:15.036 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:15.036 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:15.036 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.036 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:15.036 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.036 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:15.036 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:15.036 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:15.294 00:43:15.552 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:15.552 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:15.552 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:15.809 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:15.809 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:15.809 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.809 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:15.809 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.809 05:38:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:15.809 { 00:43:15.809 "cntlid": 53, 00:43:15.809 "qid": 0, 00:43:15.809 "state": "enabled", 00:43:15.809 "thread": "nvmf_tgt_poll_group_000", 00:43:15.809 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:43:15.809 "listen_address": { 00:43:15.809 "trtype": "TCP", 00:43:15.809 "adrfam": "IPv4", 00:43:15.809 "traddr": "10.0.0.2", 00:43:15.809 "trsvcid": "4420" 00:43:15.809 }, 00:43:15.809 "peer_address": { 00:43:15.809 "trtype": "TCP", 00:43:15.809 "adrfam": "IPv4", 00:43:15.809 "traddr": "10.0.0.1", 00:43:15.809 "trsvcid": "58596" 00:43:15.809 }, 00:43:15.809 "auth": { 00:43:15.809 "state": "completed", 00:43:15.809 "digest": "sha384", 00:43:15.809 "dhgroup": "null" 00:43:15.809 } 00:43:15.809 } 00:43:15.809 ]' 00:43:15.809 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:15.809 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:15.809 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:15.809 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:43:15.809 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:15.809 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:15.809 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:15.809 05:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:16.066 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:43:16.066 05:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:43:17.029 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:17.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:17.029 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:17.029 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:17.029 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:17.029 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:17.029 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:43:17.029 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:43:17.029 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:43:17.286 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:43:17.286 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:17.286 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:17.286 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:43:17.286 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:43:17.286 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:17.286 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:43:17.286 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:17.286 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:17.286 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:17.286 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:43:17.286 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:43:17.286 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:43:17.543 00:43:17.806 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:17.806 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:17.806 05:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:18.064 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:18.064 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:18.064 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:18.064 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:18.064 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:18.064 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:18.064 { 00:43:18.064 "cntlid": 55, 00:43:18.064 "qid": 0, 00:43:18.064 "state": "enabled", 00:43:18.064 "thread": "nvmf_tgt_poll_group_000", 00:43:18.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:43:18.064 "listen_address": { 00:43:18.064 "trtype": "TCP", 00:43:18.064 "adrfam": "IPv4", 00:43:18.064 "traddr": "10.0.0.2", 00:43:18.064 "trsvcid": "4420" 00:43:18.064 }, 00:43:18.064 "peer_address": { 00:43:18.064 "trtype": "TCP", 00:43:18.064 "adrfam": "IPv4", 00:43:18.064 "traddr": "10.0.0.1", 00:43:18.064 "trsvcid": "58620" 00:43:18.064 }, 00:43:18.064 "auth": { 00:43:18.064 "state": "completed", 00:43:18.064 "digest": "sha384", 00:43:18.064 "dhgroup": "null" 00:43:18.064 } 00:43:18.064 } 00:43:18.064 ]' 00:43:18.064 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:18.064 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:18.064 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:18.064 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:43:18.064 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:18.064 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:18.064 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:18.064 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:18.322 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:43:18.322 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:43:19.255 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:19.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:19.255 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:19.255 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:19.255 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:19.255 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:19.255 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:43:19.255 05:38:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:19.255 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:43:19.255 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:43:19.513 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:43:19.513 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:19.513 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:19.513 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:43:19.513 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:43:19.513 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:19.513 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:19.513 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:19.513 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:19.513 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:19.513 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:19.513 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:19.513 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:19.770 00:43:19.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:19.771 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:19.771 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:20.028 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:20.028 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:20.028 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:43:20.028 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:20.028 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.028 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:20.028 { 00:43:20.028 "cntlid": 57, 00:43:20.028 "qid": 0, 00:43:20.028 "state": "enabled", 00:43:20.028 "thread": "nvmf_tgt_poll_group_000", 00:43:20.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:43:20.028 "listen_address": { 00:43:20.028 "trtype": "TCP", 00:43:20.028 "adrfam": "IPv4", 00:43:20.028 "traddr": "10.0.0.2", 00:43:20.028 "trsvcid": "4420" 00:43:20.028 }, 00:43:20.028 "peer_address": { 00:43:20.028 "trtype": "TCP", 00:43:20.028 "adrfam": "IPv4", 00:43:20.028 "traddr": "10.0.0.1", 00:43:20.028 "trsvcid": "58646" 00:43:20.028 }, 00:43:20.028 "auth": { 00:43:20.028 "state": "completed", 00:43:20.028 "digest": "sha384", 00:43:20.028 "dhgroup": "ffdhe2048" 00:43:20.028 } 00:43:20.028 } 00:43:20.028 ]' 00:43:20.028 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:20.286 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:20.286 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:20.286 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:43:20.286 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:20.286 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:20.286 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:20.286 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:20.570 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:43:20.570 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:43:21.537 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:21.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:21.537 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:21.537 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.537 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:21.537 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.537 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:21.537 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:43:21.537 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:43:21.794 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:43:21.794 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:21.794 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:21.794 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:43:21.794 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:43:21.794 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:21.794 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:21.794 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.794 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:21.794 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.794 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:21.794 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:21.794 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:22.051 00:43:22.051 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:22.051 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:22.051 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:22.308 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:22.308 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:22.308 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.308 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:22.565 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.565 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:22.565 { 00:43:22.565 "cntlid": 59, 00:43:22.565 "qid": 0, 00:43:22.565 "state": "enabled", 00:43:22.565 "thread": "nvmf_tgt_poll_group_000", 00:43:22.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:43:22.565 "listen_address": { 00:43:22.565 "trtype": "TCP", 00:43:22.565 "adrfam": "IPv4", 00:43:22.565 "traddr": "10.0.0.2", 00:43:22.565 "trsvcid": "4420" 00:43:22.565 }, 00:43:22.565 "peer_address": { 00:43:22.565 "trtype": "TCP", 00:43:22.565 "adrfam": "IPv4", 00:43:22.565 "traddr": "10.0.0.1", 00:43:22.565 "trsvcid": "52406" 00:43:22.565 }, 00:43:22.565 "auth": { 00:43:22.565 "state": "completed", 00:43:22.565 "digest": "sha384", 00:43:22.565 "dhgroup": "ffdhe2048" 00:43:22.565 } 00:43:22.565 } 00:43:22.565 ]' 00:43:22.565 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:22.565 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:22.565 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:22.565 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:43:22.565 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:22.565 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:22.565 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:22.565 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:22.823 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:43:22.823 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:43:23.756 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:23.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:23.756 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:23.756 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.756 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:23.756 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.756 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:23.756 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:43:23.756 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:43:24.014 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:43:24.014 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:24.014 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:24.014 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:43:24.014 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:43:24.014 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:24.014 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:24.014 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.014 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:24.014 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.014 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:24.014 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:24.014 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:24.272 00:43:24.272 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:24.272 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:24.272 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:24.530 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:24.530 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:24.530 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.530 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:24.530 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.530 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:24.530 { 00:43:24.530 "cntlid": 61, 00:43:24.530 "qid": 0, 00:43:24.530 "state": "enabled", 00:43:24.530 "thread": "nvmf_tgt_poll_group_000", 00:43:24.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:43:24.530 "listen_address": { 00:43:24.530 "trtype": "TCP", 00:43:24.530 "adrfam": "IPv4", 00:43:24.530 "traddr": "10.0.0.2", 00:43:24.530 "trsvcid": "4420" 00:43:24.530 }, 00:43:24.530 "peer_address": { 00:43:24.530 "trtype": "TCP", 00:43:24.530 "adrfam": "IPv4", 00:43:24.530 "traddr": "10.0.0.1", 00:43:24.530 "trsvcid": "52434" 00:43:24.530 }, 00:43:24.530 "auth": { 00:43:24.530 "state": "completed", 00:43:24.530 "digest": "sha384", 00:43:24.530 "dhgroup": "ffdhe2048" 00:43:24.530 } 00:43:24.530 } 00:43:24.530 ]' 00:43:24.530 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:24.787 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:24.787 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:24.787 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:43:24.787 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:24.787 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:24.787 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:24.787 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:25.044 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:43:25.044 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:43:25.977 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:25.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:25.977 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:25.977 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:25.977 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:25.977 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:25.977 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:25.977 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:43:25.977 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:43:26.235 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:43:26.235 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:26.235 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:26.235 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:43:26.235 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:43:26.235 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:26.235 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:43:26.235 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.235 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:26.235 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.235 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:43:26.235 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:43:26.235 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:43:26.493 00:43:26.493 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:26.493 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:26.493 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:26.750 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:26.750 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:26.750 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.750 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:26.750 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.750 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:26.750 { 00:43:26.750 "cntlid": 63, 00:43:26.750 "qid": 0, 00:43:26.750 "state": "enabled", 00:43:26.750 "thread": "nvmf_tgt_poll_group_000", 00:43:26.750 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:43:26.750 "listen_address": { 00:43:26.750 "trtype": "TCP", 00:43:26.750 "adrfam": "IPv4", 00:43:26.750 "traddr": "10.0.0.2", 00:43:26.750 "trsvcid": "4420" 00:43:26.750 }, 00:43:26.750 "peer_address": { 00:43:26.750 "trtype": "TCP", 00:43:26.750 "adrfam": "IPv4", 00:43:26.750 "traddr": "10.0.0.1", 00:43:26.750 "trsvcid": "52456" 00:43:26.750 }, 00:43:26.750 "auth": { 00:43:26.750 "state": "completed", 00:43:26.750 "digest": "sha384", 00:43:26.750 "dhgroup": "ffdhe2048" 00:43:26.750 } 00:43:26.750 } 00:43:26.750 ]' 00:43:26.750 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:26.750 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:26.750 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:27.008 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:43:27.008 05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:27.008 05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:27.008 05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:27.008 05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:27.265 05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:43:27.265 05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:43:28.196 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:43:28.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:28.196 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:28.196 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.196 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:28.196 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.196 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:43:28.196 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:28.196 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:43:28.196 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:43:28.453 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:43:28.453 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:28.453 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:28.453 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:43:28.453 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:43:28.453 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:28.453 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:28.453 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.453 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:28.453 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.453 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:28.453 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:28.453 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:28.710 
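At this point the loop has moved on to the ffdhe3072 dhgroup, again starting with key0. Each qpair check that follows reads nvmf_subsystem_get_qpairs and compares the auth fields with jq, exactly as in the earlier rounds. A small helper capturing that check — check_qpair_auth is a hypothetical name, and the rpc.py path and subsystem NQN are simply reused from this log — could look like:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Return success only if the first qpair authenticated with the expected digest and dhgroup.
check_qpair_auth() {
  local digest=$1 dhgroup=$2 qpairs
  qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]] &&
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]] &&
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]
}

# For the rounds that follow in this log:
check_qpair_auth sha384 ffdhe3072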
00:43:28.710 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:28.710 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:28.710 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:28.966 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:28.966 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:28.966 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.966 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:28.966 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.966 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:28.966 { 00:43:28.966 "cntlid": 65, 00:43:28.967 "qid": 0, 00:43:28.967 "state": "enabled", 00:43:28.967 "thread": "nvmf_tgt_poll_group_000", 00:43:28.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:43:28.967 "listen_address": { 00:43:28.967 "trtype": "TCP", 00:43:28.967 "adrfam": "IPv4", 00:43:28.967 "traddr": "10.0.0.2", 00:43:28.967 "trsvcid": "4420" 00:43:28.967 }, 00:43:28.967 "peer_address": { 00:43:28.967 "trtype": "TCP", 00:43:28.967 "adrfam": "IPv4", 00:43:28.967 "traddr": "10.0.0.1", 00:43:28.967 "trsvcid": "52488" 00:43:28.967 }, 00:43:28.967 "auth": { 00:43:28.967 "state": "completed", 00:43:28.967 "digest": "sha384", 00:43:28.967 "dhgroup": "ffdhe3072" 00:43:28.967 } 00:43:28.967 } 00:43:28.967 ]' 00:43:28.967 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:28.967 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:28.967 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:29.223 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:43:29.223 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:29.223 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:29.223 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:29.223 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:29.480 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:43:29.480 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:43:30.411 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:30.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:30.411 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:30.411 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:30.411 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:30.411 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:30.411 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:30.411 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:43:30.411 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:43:30.667 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:43:30.667 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:30.667 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:30.667 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:43:30.667 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:43:30.667 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:30.667 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:30.667 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:30.667 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:30.667 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:30.667 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:30.667 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:30.667 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:30.934 00:43:30.934 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:30.934 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:30.934 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:31.193 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:31.194 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:31.194 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.194 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:31.194 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.194 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:31.194 { 00:43:31.194 "cntlid": 67, 00:43:31.194 "qid": 0, 00:43:31.194 "state": "enabled", 00:43:31.194 "thread": "nvmf_tgt_poll_group_000", 00:43:31.194 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:43:31.194 "listen_address": { 00:43:31.194 "trtype": "TCP", 00:43:31.194 "adrfam": "IPv4", 00:43:31.194 "traddr": "10.0.0.2", 00:43:31.194 "trsvcid": "4420" 00:43:31.194 }, 00:43:31.194 "peer_address": { 00:43:31.194 "trtype": "TCP", 00:43:31.194 "adrfam": "IPv4", 00:43:31.194 "traddr": "10.0.0.1", 00:43:31.194 "trsvcid": "52516" 00:43:31.194 }, 00:43:31.194 "auth": { 00:43:31.194 "state": "completed", 00:43:31.194 "digest": "sha384", 00:43:31.194 "dhgroup": "ffdhe3072" 00:43:31.194 } 00:43:31.194 } 00:43:31.194 ]' 00:43:31.194 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:31.194 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:31.194 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:31.194 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:43:31.194 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:31.450 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:31.450 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:31.450 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:31.707 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret 
DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:43:31.707 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:43:32.639 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:32.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:32.639 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:32.639 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:32.639 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:32.639 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:32.639 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:32.639 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:43:32.640 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:43:32.896 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:43:32.896 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:32.896 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:32.896 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:43:32.896 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:43:32.896 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:32.896 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:32.896 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:32.896 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:32.896 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:32.896 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:32.896 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:32.896 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:33.152 00:43:33.152 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:33.152 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:33.152 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:33.408 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:33.408 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:33.408 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:33.408 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:33.408 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:33.408 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:33.408 { 00:43:33.408 "cntlid": 69, 00:43:33.408 "qid": 0, 00:43:33.408 "state": "enabled", 00:43:33.408 "thread": "nvmf_tgt_poll_group_000", 00:43:33.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:43:33.408 "listen_address": { 00:43:33.408 "trtype": "TCP", 00:43:33.408 "adrfam": "IPv4", 00:43:33.408 "traddr": "10.0.0.2", 00:43:33.408 "trsvcid": "4420" 00:43:33.408 }, 00:43:33.408 "peer_address": { 00:43:33.408 "trtype": "TCP", 00:43:33.408 "adrfam": "IPv4", 00:43:33.408 "traddr": "10.0.0.1", 00:43:33.408 "trsvcid": "58548" 00:43:33.408 }, 00:43:33.408 "auth": { 00:43:33.408 "state": "completed", 00:43:33.408 "digest": "sha384", 00:43:33.408 "dhgroup": "ffdhe3072" 00:43:33.408 } 00:43:33.408 } 00:43:33.408 ]' 00:43:33.665 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:33.665 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:33.665 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:33.665 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:43:33.665 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:33.665 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:33.665 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:33.665 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:43:33.922 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:43:33.922 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:43:34.852 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:34.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:34.852 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:34.852 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:34.852 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:34.852 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:34.852 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:34.852 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:43:34.852 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:43:35.109 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:43:35.110 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:35.110 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:35.110 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:43:35.110 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:43:35.110 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:35.110 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:43:35.110 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:35.110 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:35.110 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:35.110 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
00:43:35.110 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:43:35.110 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:43:35.674 00:43:35.674 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:35.674 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:35.674 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:35.931 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:35.931 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:35.931 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:35.931 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:35.931 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:35.931 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:35.931 { 00:43:35.931 "cntlid": 71, 00:43:35.931 "qid": 0, 00:43:35.931 "state": "enabled", 00:43:35.931 "thread": "nvmf_tgt_poll_group_000", 00:43:35.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:43:35.931 "listen_address": { 00:43:35.931 "trtype": "TCP", 00:43:35.931 "adrfam": "IPv4", 00:43:35.931 "traddr": "10.0.0.2", 00:43:35.931 "trsvcid": "4420" 00:43:35.931 }, 00:43:35.931 "peer_address": { 00:43:35.931 "trtype": "TCP", 00:43:35.931 "adrfam": "IPv4", 00:43:35.931 "traddr": "10.0.0.1", 00:43:35.931 "trsvcid": "58566" 00:43:35.931 }, 00:43:35.931 "auth": { 00:43:35.931 "state": "completed", 00:43:35.931 "digest": "sha384", 00:43:35.931 "dhgroup": "ffdhe3072" 00:43:35.931 } 00:43:35.931 } 00:43:35.931 ]' 00:43:35.931 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:35.931 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:35.931 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:35.931 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:43:35.931 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:35.931 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:35.931 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:35.931 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:36.188 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:43:36.188 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:43:37.119 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:37.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:37.119 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:37.119 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:37.119 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:37.119 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:37.119 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:43:37.119 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:37.119 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:43:37.119 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:43:37.377 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:43:37.377 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:37.377 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:37.377 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:43:37.377 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:43:37.377 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:37.377 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:37.377 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:37.377 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:37.377 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:43:37.377 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:37.377 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:37.377 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:37.942 00:43:37.942 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:37.942 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:37.942 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:38.201 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:38.201 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:38.201 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:38.201 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:38.201 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:38.201 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:38.201 { 00:43:38.201 "cntlid": 73, 00:43:38.201 "qid": 0, 00:43:38.201 "state": "enabled", 00:43:38.201 "thread": "nvmf_tgt_poll_group_000", 00:43:38.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:43:38.201 "listen_address": { 00:43:38.201 "trtype": "TCP", 00:43:38.201 "adrfam": "IPv4", 00:43:38.201 "traddr": "10.0.0.2", 00:43:38.201 "trsvcid": "4420" 00:43:38.201 }, 00:43:38.201 "peer_address": { 00:43:38.201 "trtype": "TCP", 00:43:38.201 "adrfam": "IPv4", 00:43:38.201 "traddr": "10.0.0.1", 00:43:38.201 "trsvcid": "58600" 00:43:38.201 }, 00:43:38.201 "auth": { 00:43:38.201 "state": "completed", 00:43:38.201 "digest": "sha384", 00:43:38.201 "dhgroup": "ffdhe4096" 00:43:38.201 } 00:43:38.201 } 00:43:38.201 ]' 00:43:38.201 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:38.201 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:38.201 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:38.201 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:43:38.201 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:38.201 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:38.201 
05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:38.201 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:38.458 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:43:38.458 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:43:39.389 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:39.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:39.389 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:39.389 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.389 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:39.389 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.389 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:39.389 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:43:39.389 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:43:39.646 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:43:39.646 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:39.646 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:39.646 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:43:39.646 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:43:39.646 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:39.646 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:39.646 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.646 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:39.646 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.646 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:39.647 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:39.647 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:40.213 00:43:40.213 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:40.213 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:40.213 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:40.471 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:40.471 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:40.471 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:40.471 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:40.471 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:40.471 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:40.471 { 00:43:40.471 "cntlid": 75, 00:43:40.471 "qid": 0, 00:43:40.471 "state": "enabled", 00:43:40.471 "thread": "nvmf_tgt_poll_group_000", 00:43:40.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:43:40.471 "listen_address": { 00:43:40.471 "trtype": "TCP", 00:43:40.471 "adrfam": "IPv4", 00:43:40.471 "traddr": "10.0.0.2", 00:43:40.471 "trsvcid": "4420" 00:43:40.471 }, 00:43:40.471 "peer_address": { 00:43:40.471 "trtype": "TCP", 00:43:40.471 "adrfam": "IPv4", 00:43:40.471 "traddr": "10.0.0.1", 00:43:40.471 "trsvcid": "58622" 00:43:40.471 }, 00:43:40.471 "auth": { 00:43:40.471 "state": "completed", 00:43:40.471 "digest": "sha384", 00:43:40.471 "dhgroup": "ffdhe4096" 00:43:40.471 } 00:43:40.471 } 00:43:40.471 ]' 00:43:40.471 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:40.471 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:40.471 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:40.471 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:43:40.471 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:40.471 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:40.471 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:40.471 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:40.730 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:43:40.730 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:43:41.662 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:41.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:41.662 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:41.662 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:41.662 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:41.662 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:41.662 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:41.662 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:43:41.662 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:43:41.919 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:43:41.919 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:41.919 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:41.919 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:43:41.919 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:43:41.919 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:41.919 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:41.919 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:41.919 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:41.919 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:41.919 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:41.919 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:41.919 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:42.482 00:43:42.482 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:42.482 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:42.482 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:42.739 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:42.739 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:42.740 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:42.740 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:42.740 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:42.740 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:42.740 { 00:43:42.740 "cntlid": 77, 00:43:42.740 "qid": 0, 00:43:42.740 "state": "enabled", 00:43:42.740 "thread": "nvmf_tgt_poll_group_000", 00:43:42.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:43:42.740 "listen_address": { 00:43:42.740 "trtype": "TCP", 00:43:42.740 "adrfam": "IPv4", 00:43:42.740 "traddr": "10.0.0.2", 00:43:42.740 "trsvcid": "4420" 00:43:42.740 }, 00:43:42.740 "peer_address": { 00:43:42.740 "trtype": "TCP", 00:43:42.740 "adrfam": "IPv4", 00:43:42.740 "traddr": "10.0.0.1", 00:43:42.740 "trsvcid": "35836" 00:43:42.740 }, 00:43:42.740 "auth": { 00:43:42.740 "state": "completed", 00:43:42.740 "digest": "sha384", 00:43:42.740 "dhgroup": "ffdhe4096" 00:43:42.740 } 00:43:42.740 } 00:43:42.740 ]' 00:43:42.740 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:42.740 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:42.740 05:38:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:42.740 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:43:42.740 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:42.740 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:42.740 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:42.740 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:43.305 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:43:43.305 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:43:43.869 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:44.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:44.127 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:44.127 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:44.127 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:44.127 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:44.127 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:44.127 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:43:44.127 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:43:44.384 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:43:44.384 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:44.384 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:44.384 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:43:44.384 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:43:44.384 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:44.384 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:43:44.384 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:44.384 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:44.384 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:44.384 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:43:44.384 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:43:44.384 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:43:44.642 00:43:44.642 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:44.642 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:44.642 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:44.899 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:44.899 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:44.899 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:44.899 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:44.899 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:44.899 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:44.899 { 00:43:44.899 "cntlid": 79, 00:43:44.899 "qid": 0, 00:43:44.899 "state": "enabled", 00:43:44.899 "thread": "nvmf_tgt_poll_group_000", 00:43:44.899 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:43:44.899 "listen_address": { 00:43:44.899 "trtype": "TCP", 00:43:44.899 "adrfam": "IPv4", 00:43:44.899 "traddr": "10.0.0.2", 00:43:44.899 "trsvcid": "4420" 00:43:44.899 }, 00:43:44.899 "peer_address": { 00:43:44.899 "trtype": "TCP", 00:43:44.899 "adrfam": "IPv4", 00:43:44.899 "traddr": "10.0.0.1", 00:43:44.899 "trsvcid": "35864" 00:43:44.899 }, 00:43:44.899 "auth": { 00:43:44.899 "state": "completed", 00:43:44.899 "digest": "sha384", 00:43:44.899 "dhgroup": "ffdhe4096" 00:43:44.899 } 00:43:44.899 } 00:43:44.899 ]' 00:43:44.899 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:44.899 05:38:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:44.899 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:45.157 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:43:45.157 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:45.157 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:45.157 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:45.157 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:45.414 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:43:45.414 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:43:46.347 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:46.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:46.347 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:46.347 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:46.347 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:46.347 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:46.347 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:43:46.347 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:46.347 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:43:46.347 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:43:46.605 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:43:46.605 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:46.605 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:46.605 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:43:46.605 05:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:43:46.605 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:46.605 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:46.605 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:46.605 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:46.605 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:46.605 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:46.605 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:46.605 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:47.170 00:43:47.170 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:47.170 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:47.170 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:47.427 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:47.427 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:47.427 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:47.427 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:47.427 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:47.427 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:47.427 { 00:43:47.427 "cntlid": 81, 00:43:47.427 "qid": 0, 00:43:47.427 "state": "enabled", 00:43:47.427 "thread": "nvmf_tgt_poll_group_000", 00:43:47.427 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:43:47.427 "listen_address": { 00:43:47.427 "trtype": "TCP", 00:43:47.427 "adrfam": "IPv4", 00:43:47.427 "traddr": "10.0.0.2", 00:43:47.427 "trsvcid": "4420" 00:43:47.427 }, 00:43:47.427 "peer_address": { 00:43:47.427 "trtype": "TCP", 00:43:47.427 "adrfam": "IPv4", 00:43:47.427 "traddr": "10.0.0.1", 00:43:47.427 "trsvcid": "35894" 00:43:47.427 }, 00:43:47.427 "auth": { 00:43:47.427 "state": "completed", 00:43:47.427 "digest": 
"sha384", 00:43:47.427 "dhgroup": "ffdhe6144" 00:43:47.427 } 00:43:47.427 } 00:43:47.427 ]' 00:43:47.427 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:47.427 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:47.427 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:47.685 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:43:47.685 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:47.685 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:47.685 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:47.685 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:47.943 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:43:47.943 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:43:48.875 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:48.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:48.875 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:48.875 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:48.875 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:48.875 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:48.875 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:48.875 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:43:48.875 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:43:49.132 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:43:49.132 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:49.132 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:49.132 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:43:49.132 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:43:49.132 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:49.133 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:49.133 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:49.133 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:49.133 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:49.133 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:49.133 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:49.133 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:49.698 00:43:49.698 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:49.698 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:49.698 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:49.955 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:49.955 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:49.955 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:49.955 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:49.955 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:49.955 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:49.955 { 00:43:49.955 "cntlid": 83, 00:43:49.955 "qid": 0, 00:43:49.955 "state": "enabled", 00:43:49.955 "thread": "nvmf_tgt_poll_group_000", 00:43:49.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:43:49.955 "listen_address": { 00:43:49.955 "trtype": "TCP", 00:43:49.955 "adrfam": "IPv4", 00:43:49.955 "traddr": "10.0.0.2", 00:43:49.955 
"trsvcid": "4420" 00:43:49.955 }, 00:43:49.955 "peer_address": { 00:43:49.955 "trtype": "TCP", 00:43:49.955 "adrfam": "IPv4", 00:43:49.955 "traddr": "10.0.0.1", 00:43:49.955 "trsvcid": "35940" 00:43:49.955 }, 00:43:49.955 "auth": { 00:43:49.955 "state": "completed", 00:43:49.955 "digest": "sha384", 00:43:49.955 "dhgroup": "ffdhe6144" 00:43:49.955 } 00:43:49.955 } 00:43:49.955 ]' 00:43:49.955 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:49.955 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:49.955 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:49.955 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:43:49.955 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:49.955 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:49.955 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:49.955 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:50.212 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:43:50.212 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:43:51.143 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:51.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:51.143 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:51.143 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:51.143 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:51.143 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:51.143 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:51.143 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:43:51.143 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:43:51.400 
05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:43:51.400 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:51.400 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:51.400 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:43:51.400 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:43:51.400 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:51.400 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:51.400 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:51.400 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:51.657 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:51.657 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:51.657 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:51.657 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:52.220 00:43:52.220 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:52.220 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:52.220 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:52.479 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:52.479 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:52.479 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.479 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:52.479 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.479 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:52.479 { 00:43:52.479 "cntlid": 85, 00:43:52.479 "qid": 0, 00:43:52.479 "state": "enabled", 00:43:52.479 "thread": "nvmf_tgt_poll_group_000", 00:43:52.479 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:43:52.479 "listen_address": { 00:43:52.479 "trtype": "TCP", 00:43:52.479 "adrfam": "IPv4", 00:43:52.479 "traddr": "10.0.0.2", 00:43:52.479 "trsvcid": "4420" 00:43:52.479 }, 00:43:52.479 "peer_address": { 00:43:52.479 "trtype": "TCP", 00:43:52.479 "adrfam": "IPv4", 00:43:52.479 "traddr": "10.0.0.1", 00:43:52.479 "trsvcid": "50068" 00:43:52.479 }, 00:43:52.479 "auth": { 00:43:52.479 "state": "completed", 00:43:52.479 "digest": "sha384", 00:43:52.479 "dhgroup": "ffdhe6144" 00:43:52.479 } 00:43:52.479 } 00:43:52.479 ]' 00:43:52.479 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:52.479 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:52.479 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:52.479 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:43:52.479 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:52.479 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:52.479 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:52.479 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:52.736 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:43:52.736 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:43:53.669 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:53.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:53.669 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:53.669 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.669 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:53.669 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.669 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:53.669 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:43:53.669 05:38:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:43:53.927 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:43:53.927 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:53.927 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:53.927 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:43:53.927 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:43:53.927 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:53.927 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:43:53.927 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.927 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:53.927 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.927 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:43:53.927 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:43:53.927 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:43:54.493 00:43:54.493 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:54.493 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:54.493 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:54.751 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:54.751 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:54.751 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:54.751 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:54.751 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:54.751 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:54.751 { 00:43:54.751 "cntlid": 87, 
00:43:54.751 "qid": 0, 00:43:54.751 "state": "enabled", 00:43:54.751 "thread": "nvmf_tgt_poll_group_000", 00:43:54.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:43:54.751 "listen_address": { 00:43:54.751 "trtype": "TCP", 00:43:54.751 "adrfam": "IPv4", 00:43:54.751 "traddr": "10.0.0.2", 00:43:54.751 "trsvcid": "4420" 00:43:54.751 }, 00:43:54.751 "peer_address": { 00:43:54.751 "trtype": "TCP", 00:43:54.751 "adrfam": "IPv4", 00:43:54.751 "traddr": "10.0.0.1", 00:43:54.751 "trsvcid": "50108" 00:43:54.751 }, 00:43:54.751 "auth": { 00:43:54.751 "state": "completed", 00:43:54.751 "digest": "sha384", 00:43:54.751 "dhgroup": "ffdhe6144" 00:43:54.751 } 00:43:54.751 } 00:43:54.751 ]' 00:43:54.751 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:54.751 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:54.751 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:55.009 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:43:55.009 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:55.009 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:55.009 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:55.009 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:55.267 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:43:55.267 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:43:56.200 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:56.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:56.200 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:56.200 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:56.200 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:56.200 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:56.200 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:43:56.200 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:56.200 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:43:56.200 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:43:56.458 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:43:56.458 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:56.458 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:56.458 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:43:56.458 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:43:56.458 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:56.458 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:56.458 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:56.458 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:56.458 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:56.458 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:56.458 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:56.458 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:57.391 00:43:57.391 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:43:57.391 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:43:57.391 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:57.649 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:57.649 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:57.649 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.649 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:57.649 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.649 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:43:57.649 { 00:43:57.649 "cntlid": 89, 00:43:57.649 "qid": 0, 00:43:57.649 "state": "enabled", 00:43:57.649 "thread": "nvmf_tgt_poll_group_000", 00:43:57.649 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:43:57.649 "listen_address": { 00:43:57.649 "trtype": "TCP", 00:43:57.649 "adrfam": "IPv4", 00:43:57.649 "traddr": "10.0.0.2", 00:43:57.649 "trsvcid": "4420" 00:43:57.649 }, 00:43:57.649 "peer_address": { 00:43:57.649 "trtype": "TCP", 00:43:57.649 "adrfam": "IPv4", 00:43:57.649 "traddr": "10.0.0.1", 00:43:57.649 "trsvcid": "50134" 00:43:57.649 }, 00:43:57.649 "auth": { 00:43:57.649 "state": "completed", 00:43:57.649 "digest": "sha384", 00:43:57.649 "dhgroup": "ffdhe8192" 00:43:57.649 } 00:43:57.649 } 00:43:57.649 ]' 00:43:57.649 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:43:57.649 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:57.649 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:43:57.649 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:43:57.649 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:43:57.649 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:57.649 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:57.649 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:57.907 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:43:57.907 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:43:58.840 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:58.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:58.841 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:58.841 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:58.841 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:58.841 05:38:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:58.841 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:43:58.841 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:43:58.841 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:43:59.099 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:43:59.099 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:43:59.099 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:43:59.099 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:43:59.099 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:43:59.099 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:59.100 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:59.100 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.100 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:59.100 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:59.100 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:59.100 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:59.100 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:00.158 00:44:00.158 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:00.158 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:00.158 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:00.452 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:00.452 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:44:00.452 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:00.452 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:00.452 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:00.452 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:00.452 { 00:44:00.452 "cntlid": 91, 00:44:00.452 "qid": 0, 00:44:00.452 "state": "enabled", 00:44:00.452 "thread": "nvmf_tgt_poll_group_000", 00:44:00.452 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:44:00.452 "listen_address": { 00:44:00.452 "trtype": "TCP", 00:44:00.452 "adrfam": "IPv4", 00:44:00.452 "traddr": "10.0.0.2", 00:44:00.452 "trsvcid": "4420" 00:44:00.452 }, 00:44:00.452 "peer_address": { 00:44:00.452 "trtype": "TCP", 00:44:00.452 "adrfam": "IPv4", 00:44:00.452 "traddr": "10.0.0.1", 00:44:00.452 "trsvcid": "50162" 00:44:00.452 }, 00:44:00.452 "auth": { 00:44:00.452 "state": "completed", 00:44:00.452 "digest": "sha384", 00:44:00.452 "dhgroup": "ffdhe8192" 00:44:00.452 } 00:44:00.452 } 00:44:00.452 ]' 00:44:00.452 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:00.452 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:44:00.452 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:00.452 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:44:00.452 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:00.452 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:00.452 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:00.452 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:00.709 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:44:00.709 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:44:01.649 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:01.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:01.649 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:01.649 05:38:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.649 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:01.649 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.649 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:01.649 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:44:01.649 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:44:01.907 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:44:01.907 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:01.907 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:44:01.907 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:44:01.907 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:44:01.907 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:01.907 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:01.907 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.907 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:01.907 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.907 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:01.907 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:01.907 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:02.838 00:44:02.838 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:02.838 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:02.838 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:03.096 05:38:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:03.096 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:03.096 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.096 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:03.096 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.096 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:03.096 { 00:44:03.096 "cntlid": 93, 00:44:03.096 "qid": 0, 00:44:03.096 "state": "enabled", 00:44:03.096 "thread": "nvmf_tgt_poll_group_000", 00:44:03.096 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:44:03.096 "listen_address": { 00:44:03.096 "trtype": "TCP", 00:44:03.096 "adrfam": "IPv4", 00:44:03.096 "traddr": "10.0.0.2", 00:44:03.096 "trsvcid": "4420" 00:44:03.096 }, 00:44:03.096 "peer_address": { 00:44:03.096 "trtype": "TCP", 00:44:03.096 "adrfam": "IPv4", 00:44:03.096 "traddr": "10.0.0.1", 00:44:03.096 "trsvcid": "53504" 00:44:03.096 }, 00:44:03.096 "auth": { 00:44:03.096 "state": "completed", 00:44:03.096 "digest": "sha384", 00:44:03.096 "dhgroup": "ffdhe8192" 00:44:03.096 } 00:44:03.096 } 00:44:03.096 ]' 00:44:03.096 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:03.096 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:44:03.096 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:03.096 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:44:03.096 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:03.096 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:03.096 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:03.096 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:03.353 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:44:03.353 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:44:04.284 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:04.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:04.284 05:38:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:04.284 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:04.284 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:04.284 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:04.284 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:04.284 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:44:04.284 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:44:04.850 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:44:04.850 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:04.850 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:44:04.850 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:44:04.850 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:44:04.850 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:04.850 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:44:04.850 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:04.850 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:04.850 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:04.850 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:44:04.850 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:04.850 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:05.784 00:44:05.784 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:05.784 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:05.784 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:05.784 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:06.042 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:06.042 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:06.042 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:06.042 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:06.042 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:06.042 { 00:44:06.042 "cntlid": 95, 00:44:06.042 "qid": 0, 00:44:06.042 "state": "enabled", 00:44:06.042 "thread": "nvmf_tgt_poll_group_000", 00:44:06.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:44:06.042 "listen_address": { 00:44:06.042 "trtype": "TCP", 00:44:06.042 "adrfam": "IPv4", 00:44:06.042 "traddr": "10.0.0.2", 00:44:06.042 "trsvcid": "4420" 00:44:06.042 }, 00:44:06.042 "peer_address": { 00:44:06.042 "trtype": "TCP", 00:44:06.042 "adrfam": "IPv4", 00:44:06.042 "traddr": "10.0.0.1", 00:44:06.042 "trsvcid": "53528" 00:44:06.042 }, 00:44:06.042 "auth": { 00:44:06.042 "state": "completed", 00:44:06.042 "digest": "sha384", 00:44:06.042 "dhgroup": "ffdhe8192" 00:44:06.042 } 00:44:06.042 } 00:44:06.042 ]' 00:44:06.042 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:06.042 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:44:06.042 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:06.042 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:44:06.042 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:06.042 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:06.042 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:06.043 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:06.300 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:44:06.301 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:44:07.233 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:07.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:07.233 05:39:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:07.233 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:07.233 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:07.233 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:07.233 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:44:07.233 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:44:07.233 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:07.233 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:44:07.233 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:44:07.490 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:44:07.490 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:07.490 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:07.490 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:44:07.490 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:44:07.490 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:07.490 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:07.490 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:07.490 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:07.490 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:07.490 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:07.490 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:07.490 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:07.747 00:44:07.747 
05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:07.747 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:07.747 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:08.004 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:08.004 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:08.004 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:08.004 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:08.004 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:08.004 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:08.004 { 00:44:08.004 "cntlid": 97, 00:44:08.004 "qid": 0, 00:44:08.004 "state": "enabled", 00:44:08.004 "thread": "nvmf_tgt_poll_group_000", 00:44:08.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:44:08.004 "listen_address": { 00:44:08.004 "trtype": "TCP", 00:44:08.004 "adrfam": "IPv4", 00:44:08.004 "traddr": "10.0.0.2", 00:44:08.004 "trsvcid": "4420" 00:44:08.004 }, 00:44:08.004 "peer_address": { 00:44:08.004 "trtype": "TCP", 00:44:08.004 "adrfam": "IPv4", 00:44:08.004 "traddr": "10.0.0.1", 00:44:08.004 "trsvcid": "53554" 00:44:08.004 }, 00:44:08.004 "auth": { 00:44:08.004 "state": "completed", 00:44:08.004 "digest": "sha512", 00:44:08.004 "dhgroup": "null" 00:44:08.004 } 00:44:08.004 } 00:44:08.004 ]' 00:44:08.004 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:08.262 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:08.262 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:08.262 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:44:08.262 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:08.262 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:08.262 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:08.262 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:08.520 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:44:08.520 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:44:09.453 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:09.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:09.453 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:09.453 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.453 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:09.453 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:09.453 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:09.453 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:44:09.453 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:44:09.711 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:44:09.711 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:09.711 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:09.711 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:44:09.711 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:44:09.711 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:09.711 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:09.711 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.711 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:09.711 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:09.711 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:09.711 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:09.711 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:09.969 00:44:09.969 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:09.969 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:09.969 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:10.226 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:10.226 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:10.226 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:10.226 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:10.226 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:10.226 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:10.226 { 00:44:10.226 "cntlid": 99, 00:44:10.226 "qid": 0, 00:44:10.226 "state": "enabled", 00:44:10.226 "thread": "nvmf_tgt_poll_group_000", 00:44:10.226 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:44:10.226 "listen_address": { 00:44:10.226 "trtype": "TCP", 00:44:10.226 "adrfam": "IPv4", 00:44:10.226 "traddr": "10.0.0.2", 00:44:10.226 "trsvcid": "4420" 00:44:10.226 }, 00:44:10.226 "peer_address": { 00:44:10.226 "trtype": "TCP", 00:44:10.226 "adrfam": "IPv4", 00:44:10.226 "traddr": "10.0.0.1", 00:44:10.226 "trsvcid": "53572" 00:44:10.226 }, 00:44:10.226 "auth": { 00:44:10.226 "state": "completed", 00:44:10.226 "digest": "sha512", 00:44:10.226 "dhgroup": "null" 00:44:10.226 } 00:44:10.226 } 00:44:10.226 ]' 00:44:10.226 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:10.484 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:10.484 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:10.484 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:44:10.484 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:10.484 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:10.484 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:10.484 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:10.742 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:44:10.742 05:39:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:44:11.674 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:11.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:11.674 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:11.674 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:11.674 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:11.674 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:11.674 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:11.674 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:44:11.674 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:44:11.931 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:44:11.931 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:11.931 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:11.931 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:44:11.931 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:44:11.931 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:11.931 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:11.931 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:11.931 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:11.931 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:11.931 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:11.931 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
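The passes above drive the host-side SPDK initiator over its RPC socket: bdev_nvme_set_options restricts the host to the sha512 digest and the null DH group, and bdev_nvme_attach_controller then has to complete DH-HMAC-CHAP before the controller appears. A minimal sketch of one such pass, assuming an SPDK host application is already listening on /var/tmp/host.sock and that keyring entries named key2 and ckey2 were registered earlier in the run (the $rpc, $hostnqn and $subnqn variables are shorthand, not names from the script):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
subnqn=nqn.2024-03.io.spdk:cnode0

# Restrict the host to the digest/dhgroup combination under test.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null

# Attach a controller that must authenticate with key2 (ckey2 covers the
# controller-to-host direction), then confirm the controller was created.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
"$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'

# Detach again before the next key/dhgroup combination is tried.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0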
00:44:11.931 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:12.188 00:44:12.188 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:12.188 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:12.188 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:12.445 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:12.445 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:12.445 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.445 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:12.445 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.445 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:12.445 { 00:44:12.445 "cntlid": 101, 00:44:12.445 "qid": 0, 00:44:12.445 "state": "enabled", 00:44:12.445 "thread": "nvmf_tgt_poll_group_000", 00:44:12.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:44:12.445 "listen_address": { 00:44:12.445 "trtype": "TCP", 00:44:12.445 "adrfam": "IPv4", 00:44:12.445 "traddr": "10.0.0.2", 00:44:12.445 "trsvcid": "4420" 00:44:12.445 }, 00:44:12.445 "peer_address": { 00:44:12.445 "trtype": "TCP", 00:44:12.445 "adrfam": "IPv4", 00:44:12.445 "traddr": "10.0.0.1", 00:44:12.445 "trsvcid": "36572" 00:44:12.445 }, 00:44:12.445 "auth": { 00:44:12.445 "state": "completed", 00:44:12.445 "digest": "sha512", 00:44:12.445 "dhgroup": "null" 00:44:12.445 } 00:44:12.445 } 00:44:12.445 ]' 00:44:12.445 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:12.702 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:12.702 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:12.702 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:44:12.702 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:12.702 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:12.702 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:12.702 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:12.960 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:44:12.960 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:44:13.892 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:13.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:13.892 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:13.892 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:13.892 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:13.892 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:13.892 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:13.892 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:44:13.892 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:44:14.149 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:44:14.149 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:14.149 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:14.149 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:44:14.149 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:44:14.149 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:14.149 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:44:14.149 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.149 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:14.149 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.149 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:44:14.149 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:14.149 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:14.406 00:44:14.406 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:14.406 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:14.406 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:14.969 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:14.969 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:14.969 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.969 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:14.969 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.969 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:14.969 { 00:44:14.969 "cntlid": 103, 00:44:14.969 "qid": 0, 00:44:14.969 "state": "enabled", 00:44:14.969 "thread": "nvmf_tgt_poll_group_000", 00:44:14.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:44:14.969 "listen_address": { 00:44:14.969 "trtype": "TCP", 00:44:14.969 "adrfam": "IPv4", 00:44:14.969 "traddr": "10.0.0.2", 00:44:14.969 "trsvcid": "4420" 00:44:14.969 }, 00:44:14.969 "peer_address": { 00:44:14.969 "trtype": "TCP", 00:44:14.969 "adrfam": "IPv4", 00:44:14.969 "traddr": "10.0.0.1", 00:44:14.969 "trsvcid": "36616" 00:44:14.969 }, 00:44:14.969 "auth": { 00:44:14.969 "state": "completed", 00:44:14.969 "digest": "sha512", 00:44:14.969 "dhgroup": "null" 00:44:14.969 } 00:44:14.969 } 00:44:14.969 ]' 00:44:14.969 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:14.969 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:14.969 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:14.969 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:44:14.969 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:14.969 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:14.969 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:14.969 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:15.225 05:39:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:44:15.225 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:44:16.156 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:16.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:16.156 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:16.156 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:16.156 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:16.156 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:16.156 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:44:16.156 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:16.157 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:44:16.157 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:44:16.414 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:44:16.414 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:16.414 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:16.414 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:44:16.414 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:44:16.414 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:16.414 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:16.414 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:16.414 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:16.414 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:16.414 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
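On the target side, each pass first authorizes the host NQN on the subsystem with the key pair under test and revokes it again once the connection has been checked. A rough sketch of that add/remove cycle, assuming the harness's rpc_cmd helper resolves to scripts/rpc.py against the target's default RPC socket and that key0/ckey0 are already loaded in the target keyring (variable names are shorthand):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
subnqn=nqn.2024-03.io.spdk:cnode0

# Authorize the host with the key pair for this pass: key0 authenticates the
# host, ckey0 enables bidirectional (controller) authentication.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# ... the host attaches / connects and the qpair auth state is verified ...

# Revoke the host so the next key/dhgroup combination starts from a clean slate.
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"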
00:44:16.414 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:16.414 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:16.672 00:44:16.672 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:16.672 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:16.672 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:16.930 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:16.930 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:16.930 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:16.930 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:16.930 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:16.930 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:16.930 { 00:44:16.930 "cntlid": 105, 00:44:16.930 "qid": 0, 00:44:16.930 "state": "enabled", 00:44:16.930 "thread": "nvmf_tgt_poll_group_000", 00:44:16.930 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:44:16.930 "listen_address": { 00:44:16.930 "trtype": "TCP", 00:44:16.930 "adrfam": "IPv4", 00:44:16.930 "traddr": "10.0.0.2", 00:44:16.930 "trsvcid": "4420" 00:44:16.930 }, 00:44:16.930 "peer_address": { 00:44:16.930 "trtype": "TCP", 00:44:16.930 "adrfam": "IPv4", 00:44:16.930 "traddr": "10.0.0.1", 00:44:16.930 "trsvcid": "36652" 00:44:16.930 }, 00:44:16.930 "auth": { 00:44:16.930 "state": "completed", 00:44:16.930 "digest": "sha512", 00:44:16.930 "dhgroup": "ffdhe2048" 00:44:16.930 } 00:44:16.930 } 00:44:16.930 ]' 00:44:16.930 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:16.930 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:16.930 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:17.188 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:44:17.188 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:17.188 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:17.188 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:17.188 05:39:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:17.446 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:44:17.446 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:44:18.379 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:18.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:18.379 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:18.379 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:18.379 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:18.379 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:18.379 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:18.379 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:44:18.379 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:44:18.637 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:44:18.637 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:18.637 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:18.637 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:44:18.637 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:44:18.637 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:18.637 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:18.637 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:18.637 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:44:18.637 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:18.637 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:18.637 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:18.637 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:18.895 00:44:18.895 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:18.895 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:18.895 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:19.153 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:19.153 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:19.153 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:19.153 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:19.153 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:19.153 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:19.153 { 00:44:19.153 "cntlid": 107, 00:44:19.153 "qid": 0, 00:44:19.153 "state": "enabled", 00:44:19.153 "thread": "nvmf_tgt_poll_group_000", 00:44:19.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:44:19.153 "listen_address": { 00:44:19.153 "trtype": "TCP", 00:44:19.153 "adrfam": "IPv4", 00:44:19.153 "traddr": "10.0.0.2", 00:44:19.153 "trsvcid": "4420" 00:44:19.153 }, 00:44:19.153 "peer_address": { 00:44:19.153 "trtype": "TCP", 00:44:19.153 "adrfam": "IPv4", 00:44:19.153 "traddr": "10.0.0.1", 00:44:19.153 "trsvcid": "36680" 00:44:19.153 }, 00:44:19.153 "auth": { 00:44:19.153 "state": "completed", 00:44:19.153 "digest": "sha512", 00:44:19.153 "dhgroup": "ffdhe2048" 00:44:19.153 } 00:44:19.153 } 00:44:19.153 ]' 00:44:19.153 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:19.153 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:19.153 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:19.153 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:44:19.153 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:44:19.411 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:19.411 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:19.411 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:19.668 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:44:19.668 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:44:20.598 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:20.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:20.598 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:20.598 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:20.598 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:20.598 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:20.598 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:20.598 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:44:20.598 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:44:20.854 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:44:20.854 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:20.854 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:20.854 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:44:20.854 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:44:20.854 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:20.854 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
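Besides the SPDK bdev path, each pass also connects through the kernel NVMe/TCP initiator with nvme-cli, passing the DH-HMAC-CHAP secrets directly. A minimal sketch of that step, with placeholder secret values standing in for the DHHC-1 strings generated earlier in the run (the real values appear verbatim in the trace):

hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
hostid=5b23e107-7094-e311-b1cb-001e67a97d55
subnqn=nqn.2024-03.io.spdk:cnode0
key='DHHC-1:01:...'    # placeholder for the host secret used in this pass
ckey='DHHC-1:02:...'   # placeholder for the controller secret

# Connect via the kernel initiator, authenticating with the host secret and
# verifying the controller with the controller secret.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"

# Tear the connection down once the authenticated session has been observed.
nvme disconnect -n "$subnqn"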
00:44:20.854 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:20.854 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:20.854 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:20.854 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:20.854 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:20.854 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:21.110 00:44:21.110 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:21.110 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:21.110 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:21.366 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:21.366 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:21.366 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:21.366 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:21.366 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:21.366 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:21.366 { 00:44:21.366 "cntlid": 109, 00:44:21.366 "qid": 0, 00:44:21.366 "state": "enabled", 00:44:21.366 "thread": "nvmf_tgt_poll_group_000", 00:44:21.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:44:21.366 "listen_address": { 00:44:21.366 "trtype": "TCP", 00:44:21.366 "adrfam": "IPv4", 00:44:21.366 "traddr": "10.0.0.2", 00:44:21.366 "trsvcid": "4420" 00:44:21.366 }, 00:44:21.366 "peer_address": { 00:44:21.366 "trtype": "TCP", 00:44:21.366 "adrfam": "IPv4", 00:44:21.366 "traddr": "10.0.0.1", 00:44:21.366 "trsvcid": "36700" 00:44:21.366 }, 00:44:21.366 "auth": { 00:44:21.366 "state": "completed", 00:44:21.366 "digest": "sha512", 00:44:21.367 "dhgroup": "ffdhe2048" 00:44:21.367 } 00:44:21.367 } 00:44:21.367 ]' 00:44:21.367 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:21.367 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:21.367 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:21.367 05:39:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:44:21.367 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:21.623 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:21.623 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:21.623 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:21.879 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:44:21.879 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:44:22.809 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:22.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:22.809 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:22.809 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:22.809 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:22.809 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:22.809 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:22.809 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:44:22.810 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:44:23.067 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:44:23.067 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:23.067 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:23.067 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:44:23.067 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:44:23.067 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:23.067 05:39:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:44:23.067 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:23.067 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:23.067 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:23.067 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:44:23.067 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:23.067 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:23.324 00:44:23.324 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:23.324 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:23.324 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:23.581 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:23.581 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:23.581 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:23.581 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:23.581 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:23.581 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:23.581 { 00:44:23.581 "cntlid": 111, 00:44:23.581 "qid": 0, 00:44:23.581 "state": "enabled", 00:44:23.581 "thread": "nvmf_tgt_poll_group_000", 00:44:23.581 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:44:23.581 "listen_address": { 00:44:23.581 "trtype": "TCP", 00:44:23.581 "adrfam": "IPv4", 00:44:23.581 "traddr": "10.0.0.2", 00:44:23.581 "trsvcid": "4420" 00:44:23.581 }, 00:44:23.581 "peer_address": { 00:44:23.581 "trtype": "TCP", 00:44:23.581 "adrfam": "IPv4", 00:44:23.581 "traddr": "10.0.0.1", 00:44:23.581 "trsvcid": "46790" 00:44:23.581 }, 00:44:23.581 "auth": { 00:44:23.581 "state": "completed", 00:44:23.581 "digest": "sha512", 00:44:23.581 "dhgroup": "ffdhe2048" 00:44:23.581 } 00:44:23.581 } 00:44:23.581 ]' 00:44:23.581 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:23.581 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:23.581 
05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:23.581 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:44:23.581 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:23.581 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:23.581 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:23.581 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:23.838 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:44:23.838 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:44:24.771 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:24.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:24.771 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:24.771 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:24.771 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:24.771 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:24.772 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:44:24.772 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:24.772 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:44:24.772 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:44:25.029 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:44:25.029 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:25.029 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:25.030 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:44:25.030 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:44:25.030 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:25.030 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:25.030 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:25.030 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:25.030 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:25.030 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:25.030 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:25.030 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:25.595 00:44:25.595 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:25.595 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:25.595 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:25.595 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:25.595 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:25.595 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:25.595 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:25.853 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:25.853 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:25.853 { 00:44:25.853 "cntlid": 113, 00:44:25.853 "qid": 0, 00:44:25.853 "state": "enabled", 00:44:25.853 "thread": "nvmf_tgt_poll_group_000", 00:44:25.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:44:25.853 "listen_address": { 00:44:25.853 "trtype": "TCP", 00:44:25.853 "adrfam": "IPv4", 00:44:25.853 "traddr": "10.0.0.2", 00:44:25.853 "trsvcid": "4420" 00:44:25.853 }, 00:44:25.853 "peer_address": { 00:44:25.853 "trtype": "TCP", 00:44:25.853 "adrfam": "IPv4", 00:44:25.853 "traddr": "10.0.0.1", 00:44:25.853 "trsvcid": "46826" 00:44:25.853 }, 00:44:25.853 "auth": { 00:44:25.853 "state": "completed", 00:44:25.853 "digest": "sha512", 00:44:25.853 "dhgroup": "ffdhe3072" 00:44:25.853 } 00:44:25.853 } 00:44:25.853 ]' 00:44:25.853 05:39:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:25.853 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:25.853 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:25.853 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:44:25.853 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:25.853 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:25.853 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:25.853 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:26.111 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:44:26.111 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:44:27.044 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:27.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:27.044 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:27.044 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:27.044 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:27.044 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:27.044 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:27.044 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:44:27.044 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:44:27.302 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:44:27.302 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:27.302 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:44:27.302 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:44:27.302 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:44:27.302 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:27.302 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:27.302 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:27.302 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:27.302 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:27.302 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:27.302 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:27.302 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:27.559 00:44:27.817 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:27.817 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:27.817 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:28.075 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:28.075 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:28.075 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:28.075 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:28.075 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:28.075 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:28.075 { 00:44:28.075 "cntlid": 115, 00:44:28.075 "qid": 0, 00:44:28.075 "state": "enabled", 00:44:28.075 "thread": "nvmf_tgt_poll_group_000", 00:44:28.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:44:28.075 "listen_address": { 00:44:28.075 "trtype": "TCP", 00:44:28.075 "adrfam": "IPv4", 00:44:28.075 "traddr": "10.0.0.2", 00:44:28.075 "trsvcid": "4420" 00:44:28.075 }, 00:44:28.075 "peer_address": { 00:44:28.075 "trtype": "TCP", 00:44:28.075 "adrfam": "IPv4", 
00:44:28.075 "traddr": "10.0.0.1", 00:44:28.075 "trsvcid": "46868" 00:44:28.075 }, 00:44:28.075 "auth": { 00:44:28.075 "state": "completed", 00:44:28.075 "digest": "sha512", 00:44:28.075 "dhgroup": "ffdhe3072" 00:44:28.075 } 00:44:28.075 } 00:44:28.075 ]' 00:44:28.075 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:28.075 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:28.075 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:28.075 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:44:28.075 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:28.075 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:28.075 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:28.075 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:28.333 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:44:28.333 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:44:29.267 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:29.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:29.267 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:29.267 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:29.267 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:29.267 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:29.267 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:29.267 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:44:29.267 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:44:29.525 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:44:29.525 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:29.525 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:29.525 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:44:29.525 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:44:29.525 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:29.525 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:29.525 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:29.525 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:29.525 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:29.525 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:29.525 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:29.525 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:29.783 00:44:30.041 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:30.041 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:30.041 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:30.299 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:30.299 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:30.299 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:30.299 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:30.299 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:30.299 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:30.299 { 00:44:30.299 "cntlid": 117, 00:44:30.299 "qid": 0, 00:44:30.299 "state": "enabled", 00:44:30.299 "thread": "nvmf_tgt_poll_group_000", 00:44:30.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:44:30.299 "listen_address": { 00:44:30.299 "trtype": "TCP", 
00:44:30.299 "adrfam": "IPv4", 00:44:30.299 "traddr": "10.0.0.2", 00:44:30.299 "trsvcid": "4420" 00:44:30.299 }, 00:44:30.299 "peer_address": { 00:44:30.299 "trtype": "TCP", 00:44:30.299 "adrfam": "IPv4", 00:44:30.299 "traddr": "10.0.0.1", 00:44:30.299 "trsvcid": "46900" 00:44:30.299 }, 00:44:30.299 "auth": { 00:44:30.299 "state": "completed", 00:44:30.299 "digest": "sha512", 00:44:30.299 "dhgroup": "ffdhe3072" 00:44:30.299 } 00:44:30.299 } 00:44:30.299 ]' 00:44:30.299 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:30.299 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:30.299 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:30.299 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:44:30.299 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:30.299 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:30.299 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:30.299 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:30.558 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:44:30.558 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:44:31.490 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:31.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:31.490 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:31.490 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:31.490 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:31.490 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:31.490 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:31.490 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:44:31.490 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:44:31.748 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:44:31.748 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:31.748 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:31.748 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:44:31.748 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:44:31.748 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:31.748 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:44:31.748 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:31.748 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:31.748 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:31.748 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:44:31.748 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:31.748 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:32.005 00:44:32.261 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:32.261 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:32.261 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:32.517 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:32.517 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:32.517 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:32.517 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:32.517 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:32.518 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:32.518 { 00:44:32.518 "cntlid": 119, 00:44:32.518 "qid": 0, 00:44:32.518 "state": "enabled", 00:44:32.518 "thread": "nvmf_tgt_poll_group_000", 00:44:32.518 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:44:32.518 "listen_address": { 00:44:32.518 "trtype": "TCP", 00:44:32.518 "adrfam": "IPv4", 00:44:32.518 "traddr": "10.0.0.2", 00:44:32.518 "trsvcid": "4420" 00:44:32.518 }, 00:44:32.518 "peer_address": { 00:44:32.518 "trtype": "TCP", 00:44:32.518 "adrfam": "IPv4", 00:44:32.518 "traddr": "10.0.0.1", 00:44:32.518 "trsvcid": "49346" 00:44:32.518 }, 00:44:32.518 "auth": { 00:44:32.518 "state": "completed", 00:44:32.518 "digest": "sha512", 00:44:32.518 "dhgroup": "ffdhe3072" 00:44:32.518 } 00:44:32.518 } 00:44:32.518 ]' 00:44:32.518 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:32.518 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:32.518 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:32.518 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:44:32.518 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:32.518 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:32.518 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:32.518 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:32.774 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:44:32.774 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:44:33.707 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:33.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:33.707 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:33.707 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:33.707 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:33.707 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:33.707 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:44:33.707 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:33.707 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:44:33.707 05:39:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:44:33.963 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:44:33.963 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:33.963 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:33.963 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:44:33.963 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:44:33.963 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:33.963 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:33.963 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:33.963 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:33.963 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:33.963 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:33.963 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:33.963 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:34.527 00:44:34.527 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:34.527 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:34.527 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:34.785 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:34.785 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:34.785 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:34.785 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:34.785 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:34.785 05:39:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:34.785 { 00:44:34.785 "cntlid": 121, 00:44:34.785 "qid": 0, 00:44:34.785 "state": "enabled", 00:44:34.785 "thread": "nvmf_tgt_poll_group_000", 00:44:34.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:44:34.785 "listen_address": { 00:44:34.785 "trtype": "TCP", 00:44:34.785 "adrfam": "IPv4", 00:44:34.785 "traddr": "10.0.0.2", 00:44:34.785 "trsvcid": "4420" 00:44:34.785 }, 00:44:34.785 "peer_address": { 00:44:34.785 "trtype": "TCP", 00:44:34.785 "adrfam": "IPv4", 00:44:34.785 "traddr": "10.0.0.1", 00:44:34.785 "trsvcid": "49378" 00:44:34.785 }, 00:44:34.785 "auth": { 00:44:34.785 "state": "completed", 00:44:34.785 "digest": "sha512", 00:44:34.785 "dhgroup": "ffdhe4096" 00:44:34.785 } 00:44:34.785 } 00:44:34.785 ]' 00:44:34.785 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:34.785 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:34.785 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:34.785 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:44:34.785 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:34.785 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:34.785 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:34.785 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:35.043 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:44:35.043 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:44:35.979 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:35.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:35.979 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:35.979 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:35.979 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:35.979 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
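The remaining entries in this segment run the same cycle for key1, key2 and key3 with dhgroup ffdhe4096, and then again with ffdhe6144. Condensed as a sketch under the same assumptions as above, with $HOSTNQN holding the uuid-based host NQN from the log and $KEY/$CKEY standing in for the DHHC-1 secrets (placeholders, not the literal values printed in the log):
# Sketch of one keyid iteration as performed below; commands and flags are copied
# from the log, secrets replaced by placeholders. hostrpc/rpc_cmd as in the earlier sketch.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
KEY='DHHC-1:...'    # placeholder for the host key secret
CKEY='DHHC-1:...'   # placeholder for the controller key secret
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# ... qpair auth checks as in the earlier sketch ...
hostrpc bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$HOSTNQN" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
    --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"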
00:44:35.979 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:35.979 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:44:35.979 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:44:36.237 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:44:36.237 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:36.237 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:36.237 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:44:36.237 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:44:36.237 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:36.237 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:36.237 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.237 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:36.237 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.237 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:36.237 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:36.237 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:36.802 00:44:36.803 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:36.803 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:36.803 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:37.060 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:37.060 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:37.060 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:44:37.061 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:37.061 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:37.061 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:37.061 { 00:44:37.061 "cntlid": 123, 00:44:37.061 "qid": 0, 00:44:37.061 "state": "enabled", 00:44:37.061 "thread": "nvmf_tgt_poll_group_000", 00:44:37.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:44:37.061 "listen_address": { 00:44:37.061 "trtype": "TCP", 00:44:37.061 "adrfam": "IPv4", 00:44:37.061 "traddr": "10.0.0.2", 00:44:37.061 "trsvcid": "4420" 00:44:37.061 }, 00:44:37.061 "peer_address": { 00:44:37.061 "trtype": "TCP", 00:44:37.061 "adrfam": "IPv4", 00:44:37.061 "traddr": "10.0.0.1", 00:44:37.061 "trsvcid": "49412" 00:44:37.061 }, 00:44:37.061 "auth": { 00:44:37.061 "state": "completed", 00:44:37.061 "digest": "sha512", 00:44:37.061 "dhgroup": "ffdhe4096" 00:44:37.061 } 00:44:37.061 } 00:44:37.061 ]' 00:44:37.061 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:37.061 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:37.061 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:37.061 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:44:37.061 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:37.061 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:37.061 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:37.061 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:37.319 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:44:37.319 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:44:38.252 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:38.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:38.252 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:38.252 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:38.252 05:39:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:38.252 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:38.252 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:38.252 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:44:38.252 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:44:38.511 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:44:38.511 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:38.511 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:38.511 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:44:38.511 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:44:38.511 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:38.511 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:38.511 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:38.511 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:38.511 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:38.511 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:38.511 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:38.511 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:39.077 00:44:39.077 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:39.077 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:39.077 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:39.334 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:39.334 05:39:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:39.334 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:39.335 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:39.335 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:39.335 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:39.335 { 00:44:39.335 "cntlid": 125, 00:44:39.335 "qid": 0, 00:44:39.335 "state": "enabled", 00:44:39.335 "thread": "nvmf_tgt_poll_group_000", 00:44:39.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:44:39.335 "listen_address": { 00:44:39.335 "trtype": "TCP", 00:44:39.335 "adrfam": "IPv4", 00:44:39.335 "traddr": "10.0.0.2", 00:44:39.335 "trsvcid": "4420" 00:44:39.335 }, 00:44:39.335 "peer_address": { 00:44:39.335 "trtype": "TCP", 00:44:39.335 "adrfam": "IPv4", 00:44:39.335 "traddr": "10.0.0.1", 00:44:39.335 "trsvcid": "49434" 00:44:39.335 }, 00:44:39.335 "auth": { 00:44:39.335 "state": "completed", 00:44:39.335 "digest": "sha512", 00:44:39.335 "dhgroup": "ffdhe4096" 00:44:39.335 } 00:44:39.335 } 00:44:39.335 ]' 00:44:39.335 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:39.335 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:39.335 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:39.335 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:44:39.335 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:39.335 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:39.335 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:39.335 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:39.966 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:44:39.966 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:44:40.584 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:40.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:40.584 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:40.584 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:40.584 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:40.584 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:40.584 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:40.584 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:44:40.584 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:44:40.842 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:44:40.842 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:40.842 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:40.842 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:44:40.842 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:44:40.842 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:40.842 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:44:40.842 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:40.842 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:40.842 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:40.842 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:44:40.842 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:40.842 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:41.406 00:44:41.406 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:41.406 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:41.406 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:41.406 05:39:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:41.406 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:41.406 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:41.406 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:41.663 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:41.663 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:41.663 { 00:44:41.663 "cntlid": 127, 00:44:41.663 "qid": 0, 00:44:41.663 "state": "enabled", 00:44:41.663 "thread": "nvmf_tgt_poll_group_000", 00:44:41.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:44:41.663 "listen_address": { 00:44:41.663 "trtype": "TCP", 00:44:41.663 "adrfam": "IPv4", 00:44:41.663 "traddr": "10.0.0.2", 00:44:41.663 "trsvcid": "4420" 00:44:41.663 }, 00:44:41.663 "peer_address": { 00:44:41.663 "trtype": "TCP", 00:44:41.663 "adrfam": "IPv4", 00:44:41.663 "traddr": "10.0.0.1", 00:44:41.663 "trsvcid": "49468" 00:44:41.663 }, 00:44:41.663 "auth": { 00:44:41.663 "state": "completed", 00:44:41.663 "digest": "sha512", 00:44:41.663 "dhgroup": "ffdhe4096" 00:44:41.663 } 00:44:41.663 } 00:44:41.663 ]' 00:44:41.664 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:41.664 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:41.664 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:41.664 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:44:41.664 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:41.664 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:41.664 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:41.664 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:41.921 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:44:41.921 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:44:42.852 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:42.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:42.852 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:42.852 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:42.852 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:42.852 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:42.852 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:44:42.852 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:42.852 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:44:42.852 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:44:43.110 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:44:43.110 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:43.110 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:43.110 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:44:43.110 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:44:43.110 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:43.110 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:43.110 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:43.110 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:43.110 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:43.110 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:43.110 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:43.110 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:43.674 00:44:43.674 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:43.674 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:43.674 
05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:43.931 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:43.931 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:43.931 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:43.931 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:43.931 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:43.931 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:43.931 { 00:44:43.931 "cntlid": 129, 00:44:43.931 "qid": 0, 00:44:43.931 "state": "enabled", 00:44:43.931 "thread": "nvmf_tgt_poll_group_000", 00:44:43.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:44:43.931 "listen_address": { 00:44:43.931 "trtype": "TCP", 00:44:43.931 "adrfam": "IPv4", 00:44:43.931 "traddr": "10.0.0.2", 00:44:43.931 "trsvcid": "4420" 00:44:43.931 }, 00:44:43.931 "peer_address": { 00:44:43.931 "trtype": "TCP", 00:44:43.931 "adrfam": "IPv4", 00:44:43.931 "traddr": "10.0.0.1", 00:44:43.931 "trsvcid": "34364" 00:44:43.931 }, 00:44:43.931 "auth": { 00:44:43.931 "state": "completed", 00:44:43.931 "digest": "sha512", 00:44:43.931 "dhgroup": "ffdhe6144" 00:44:43.931 } 00:44:43.931 } 00:44:43.931 ]' 00:44:43.931 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:43.931 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:43.931 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:43.931 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:44:43.932 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:43.932 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:43.932 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:43.932 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:44.497 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:44:44.497 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret 
DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:44:45.427 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:45.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:45.427 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:45.427 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.427 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:45.427 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.427 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:45.427 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:44:45.427 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:44:45.427 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:44:45.427 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:45.427 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:45.427 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:44:45.427 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:44:45.427 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:45.427 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:45.427 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:45.427 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:45.684 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:45.684 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:45.684 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:45.684 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:46.248 00:44:46.248 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:46.248 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:46.248 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:46.505 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:46.505 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:46.505 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:46.505 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:46.505 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:46.505 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:46.505 { 00:44:46.505 "cntlid": 131, 00:44:46.505 "qid": 0, 00:44:46.505 "state": "enabled", 00:44:46.505 "thread": "nvmf_tgt_poll_group_000", 00:44:46.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:44:46.505 "listen_address": { 00:44:46.505 "trtype": "TCP", 00:44:46.505 "adrfam": "IPv4", 00:44:46.505 "traddr": "10.0.0.2", 00:44:46.505 "trsvcid": "4420" 00:44:46.505 }, 00:44:46.505 "peer_address": { 00:44:46.505 "trtype": "TCP", 00:44:46.505 "adrfam": "IPv4", 00:44:46.505 "traddr": "10.0.0.1", 00:44:46.505 "trsvcid": "34394" 00:44:46.505 }, 00:44:46.505 "auth": { 00:44:46.505 "state": "completed", 00:44:46.505 "digest": "sha512", 00:44:46.505 "dhgroup": "ffdhe6144" 00:44:46.505 } 00:44:46.505 } 00:44:46.505 ]' 00:44:46.505 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:46.505 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:46.505 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:46.505 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:44:46.505 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:46.505 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:46.505 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:46.505 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:47.069 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:44:47.069 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:44:48.002 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:48.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:48.002 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:48.002 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:48.002 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:48.002 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:48.002 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:48.002 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:44:48.002 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:44:48.258 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:44:48.258 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:48.258 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:48.258 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:44:48.258 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:44:48.258 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:48.258 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:48.258 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:48.258 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:48.258 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:48.258 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:48.258 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:48.258 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:48.822 00:44:48.822 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:48.822 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:48.822 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:49.080 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:49.080 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:49.080 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:49.080 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:49.080 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:49.080 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:49.080 { 00:44:49.080 "cntlid": 133, 00:44:49.080 "qid": 0, 00:44:49.080 "state": "enabled", 00:44:49.080 "thread": "nvmf_tgt_poll_group_000", 00:44:49.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:44:49.080 "listen_address": { 00:44:49.080 "trtype": "TCP", 00:44:49.080 "adrfam": "IPv4", 00:44:49.080 "traddr": "10.0.0.2", 00:44:49.080 "trsvcid": "4420" 00:44:49.080 }, 00:44:49.080 "peer_address": { 00:44:49.080 "trtype": "TCP", 00:44:49.080 "adrfam": "IPv4", 00:44:49.080 "traddr": "10.0.0.1", 00:44:49.080 "trsvcid": "34418" 00:44:49.080 }, 00:44:49.080 "auth": { 00:44:49.080 "state": "completed", 00:44:49.080 "digest": "sha512", 00:44:49.080 "dhgroup": "ffdhe6144" 00:44:49.080 } 00:44:49.080 } 00:44:49.080 ]' 00:44:49.080 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:49.080 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:49.080 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:49.080 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:44:49.080 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:49.338 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:49.338 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:49.338 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:49.596 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret 
DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:44:49.596 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:44:50.529 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:50.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:50.529 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:50.529 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:50.529 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:50.529 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:50.529 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:50.529 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:44:50.529 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:44:50.787 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:44:50.787 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:50.787 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:50.787 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:44:50.787 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:44:50.787 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:50.787 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:44:50.787 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:50.787 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:50.787 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:50.787 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:44:50.787 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:44:50.787 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:44:51.353 00:44:51.353 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:51.353 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:51.353 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:51.610 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:51.610 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:51.610 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:51.610 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:51.610 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:51.610 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:51.610 { 00:44:51.610 "cntlid": 135, 00:44:51.610 "qid": 0, 00:44:51.610 "state": "enabled", 00:44:51.611 "thread": "nvmf_tgt_poll_group_000", 00:44:51.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:44:51.611 "listen_address": { 00:44:51.611 "trtype": "TCP", 00:44:51.611 "adrfam": "IPv4", 00:44:51.611 "traddr": "10.0.0.2", 00:44:51.611 "trsvcid": "4420" 00:44:51.611 }, 00:44:51.611 "peer_address": { 00:44:51.611 "trtype": "TCP", 00:44:51.611 "adrfam": "IPv4", 00:44:51.611 "traddr": "10.0.0.1", 00:44:51.611 "trsvcid": "34448" 00:44:51.611 }, 00:44:51.611 "auth": { 00:44:51.611 "state": "completed", 00:44:51.611 "digest": "sha512", 00:44:51.611 "dhgroup": "ffdhe6144" 00:44:51.611 } 00:44:51.611 } 00:44:51.611 ]' 00:44:51.611 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:51.611 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:51.611 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:51.611 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:44:51.611 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:51.611 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:51.611 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:51.611 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:51.868 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:44:51.868 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:44:52.801 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:52.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:52.801 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:52.801 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:52.801 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:52.801 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:52.801 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:44:52.802 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:52.802 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:44:52.802 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:44:53.060 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:44:53.060 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:53.060 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:53.060 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:44:53.060 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:44:53.060 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:53.060 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:53.060 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:53.060 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:53.060 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:53.060 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:53.060 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:53.060 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:53.994 00:44:53.994 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:53.994 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:53.994 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:54.252 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:54.252 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:54.252 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:54.252 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:54.252 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:54.252 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:54.252 { 00:44:54.252 "cntlid": 137, 00:44:54.252 "qid": 0, 00:44:54.252 "state": "enabled", 00:44:54.252 "thread": "nvmf_tgt_poll_group_000", 00:44:54.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:44:54.252 "listen_address": { 00:44:54.252 "trtype": "TCP", 00:44:54.252 "adrfam": "IPv4", 00:44:54.252 "traddr": "10.0.0.2", 00:44:54.252 "trsvcid": "4420" 00:44:54.252 }, 00:44:54.252 "peer_address": { 00:44:54.252 "trtype": "TCP", 00:44:54.252 "adrfam": "IPv4", 00:44:54.252 "traddr": "10.0.0.1", 00:44:54.252 "trsvcid": "47376" 00:44:54.252 }, 00:44:54.252 "auth": { 00:44:54.252 "state": "completed", 00:44:54.252 "digest": "sha512", 00:44:54.252 "dhgroup": "ffdhe8192" 00:44:54.252 } 00:44:54.252 } 00:44:54.252 ]' 00:44:54.252 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:54.252 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:54.252 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:54.252 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:44:54.252 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:54.252 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:54.252 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:54.252 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:54.510 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:44:54.510 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:44:55.444 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:55.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:55.701 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:55.701 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:55.701 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:55.701 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:55.701 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:55.701 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:44:55.701 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:44:55.958 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:44:55.958 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:55.958 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:55.958 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:44:55.958 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:44:55.958 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:55.958 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:55.958 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:55.958 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:55.958 05:39:50 
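# A minimal sketch (not captured output) of the nvme-cli half of the bidirectional
# DH-HMAC-CHAP exchange seen above: --dhchap-secret carries the host's DHHC-1 key and
# --dhchap-ctrl-secret the key the host expects the controller to answer with. The
# HOST_SECRET/CTRL_SECRET values below are hypothetical placeholders; the real DHHC-1
# strings appear verbatim in the log entries above.
HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:${HOSTID}"
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOST_SECRET='DHHC-1:00:<host key>'        # placeholder
CTRL_SECRET='DHHC-1:03:<controller key>'  # placeholder
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
    --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"
nvme disconnect -n "$SUBNQN"              # the test tears the kernel connection down again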
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:55.958 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:55.958 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:55.958 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:56.887 00:44:56.887 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:56.887 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:56.887 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:57.145 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:57.145 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:57.145 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:57.145 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:57.145 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:57.145 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:57.145 { 00:44:57.145 "cntlid": 139, 00:44:57.145 "qid": 0, 00:44:57.145 "state": "enabled", 00:44:57.145 "thread": "nvmf_tgt_poll_group_000", 00:44:57.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:44:57.145 "listen_address": { 00:44:57.145 "trtype": "TCP", 00:44:57.145 "adrfam": "IPv4", 00:44:57.145 "traddr": "10.0.0.2", 00:44:57.145 "trsvcid": "4420" 00:44:57.145 }, 00:44:57.145 "peer_address": { 00:44:57.145 "trtype": "TCP", 00:44:57.145 "adrfam": "IPv4", 00:44:57.145 "traddr": "10.0.0.1", 00:44:57.145 "trsvcid": "47382" 00:44:57.145 }, 00:44:57.145 "auth": { 00:44:57.145 "state": "completed", 00:44:57.145 "digest": "sha512", 00:44:57.145 "dhgroup": "ffdhe8192" 00:44:57.145 } 00:44:57.145 } 00:44:57.145 ]' 00:44:57.145 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:57.145 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:57.145 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:57.145 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:44:57.145 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:44:57.145 05:39:51 
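# Sketch (not captured output) of the verification the script repeats after every attach:
# read the controller name back from the host bdev app and the auth parameters of the
# freshly created qpair from the target, then assert they match what was configured.
# Assumes the target app answers on its default RPC socket (the rpc_cmd wrapper in the log
# hides the exact socket) and the host app on /var/tmp/host.sock as shown above.
set -e
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]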
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:57.145 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:57.145 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:57.748 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:44:57.748 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: --dhchap-ctrl-secret DHHC-1:02:NTJiZDgxMDFlYWI3ODBhMDMwODMwODNlZmMzNGYwOGI2YTllMTFiM2Q0ZDE4OWQ2YA4Byw==: 00:44:58.679 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:58.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:58.679 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:58.679 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:58.680 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:58.680 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:58.680 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:44:58.680 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:44:58.680 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:44:58.680 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:44:58.680 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:44:58.680 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:44:58.680 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:44:58.680 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:44:58.680 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:58.680 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:58.680 05:39:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:58.680 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:58.680 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:58.680 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:58.680 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:58.680 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:59.612 00:44:59.612 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:44:59.612 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:44:59.612 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:59.871 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:59.871 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:59.871 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:59.871 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:59.871 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:59.871 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:44:59.871 { 00:44:59.871 "cntlid": 141, 00:44:59.871 "qid": 0, 00:44:59.871 "state": "enabled", 00:44:59.871 "thread": "nvmf_tgt_poll_group_000", 00:44:59.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:44:59.871 "listen_address": { 00:44:59.871 "trtype": "TCP", 00:44:59.871 "adrfam": "IPv4", 00:44:59.871 "traddr": "10.0.0.2", 00:44:59.871 "trsvcid": "4420" 00:44:59.871 }, 00:44:59.871 "peer_address": { 00:44:59.871 "trtype": "TCP", 00:44:59.871 "adrfam": "IPv4", 00:44:59.871 "traddr": "10.0.0.1", 00:44:59.871 "trsvcid": "47404" 00:44:59.871 }, 00:44:59.871 "auth": { 00:44:59.871 "state": "completed", 00:44:59.871 "digest": "sha512", 00:44:59.871 "dhgroup": "ffdhe8192" 00:44:59.871 } 00:44:59.871 } 00:44:59.871 ]' 00:44:59.871 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:44:59.871 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:59.871 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:44:59.871 05:39:54 
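# Sketch (not captured output) of one complete connect_authenticate iteration as driven
# above for key2/ckey2 with sha512 + ffdhe8192. It assumes the DHHC-1 keys named key2/ckey2
# were registered earlier in the run, the target app answers on its default RPC socket and
# the host bdev application on /var/tmp/host.sock, as in this log.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
SUBNQN=nqn.2024-03.io.spdk:cnode0
# host initiator: restrict this pass to a single digest and DH group
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
# target: allow the host NQN and bind it to key2 (ckey2 enables bidirectional auth)
"$rpc" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
# host: attach a controller, presenting the same key pair
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
# ... qpair digest/dhgroup/state checks as in the sketch further up ...
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"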
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:44:59.871 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:00.130 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:00.130 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:00.130 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:00.388 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:45:00.388 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:01:NzQ3Y2JiMzhjZjgwYzRlMTQ0YTMzOTRiYWM3MGM0OWYtYbZn: 00:45:01.321 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:01.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:01.321 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:01.321 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:01.321 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:01.321 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:01.321 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:45:01.321 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:45:01.321 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:45:01.578 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:45:01.578 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:45:01.578 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:45:01.578 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:45:01.578 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:45:01.578 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:01.578 05:39:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:45:01.578 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:01.578 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:01.578 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:01.578 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:45:01.578 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:45:01.578 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:45:02.511 00:45:02.511 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:02.511 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:45:02.511 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:02.769 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:02.769 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:02.769 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:02.769 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:02.769 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:02.769 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:02.769 { 00:45:02.769 "cntlid": 143, 00:45:02.769 "qid": 0, 00:45:02.769 "state": "enabled", 00:45:02.769 "thread": "nvmf_tgt_poll_group_000", 00:45:02.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:45:02.769 "listen_address": { 00:45:02.769 "trtype": "TCP", 00:45:02.769 "adrfam": "IPv4", 00:45:02.769 "traddr": "10.0.0.2", 00:45:02.769 "trsvcid": "4420" 00:45:02.769 }, 00:45:02.769 "peer_address": { 00:45:02.769 "trtype": "TCP", 00:45:02.769 "adrfam": "IPv4", 00:45:02.769 "traddr": "10.0.0.1", 00:45:02.769 "trsvcid": "41898" 00:45:02.769 }, 00:45:02.769 "auth": { 00:45:02.769 "state": "completed", 00:45:02.769 "digest": "sha512", 00:45:02.769 "dhgroup": "ffdhe8192" 00:45:02.769 } 00:45:02.769 } 00:45:02.769 ]' 00:45:02.769 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:45:02.769 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:45:02.769 
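# Sketch (not captured output): key3 has no companion ckey3 in this run, so the
# ${ckeys[$3]:+...} expansion above drops --dhchap-ctrlr-key entirely and the exchange is
# one-way: the controller authenticates the host, but the host does not require the
# controller to prove possession of a key in return. Same socket/NQN assumptions as above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
SUBNQN=nqn.2024-03.io.spdk:cnode0
"$rpc" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3      # no controller key granted
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key3                  # no --dhchap-ctrlr-key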
05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:45:02.769 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:45:02.769 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:02.769 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:02.769 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:02.769 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:03.027 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:45:03.027 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:45:03.960 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:03.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:03.960 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:03.960 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.960 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:03.960 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.960 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:45:03.960 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:45:03.960 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:45:03.960 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:45:03.960 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:45:03.960 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:45:04.217 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:45:04.217 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:45:04.217 05:39:58 
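# Sketch (not captured output) of the widening step that starts the final sweep above: the
# host initiator is reopened to every digest and DH group at once, and the qpair check that
# follows still expects sha512/ffdhe8192, the strongest pair on offer. Same host RPC socket
# as in this log.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192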
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:45:04.217 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:45:04.217 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:45:04.217 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:04.217 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:04.217 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.217 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:04.476 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.476 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:04.476 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:04.476 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:05.408 00:45:05.409 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:05.409 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:45:05.409 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:05.409 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:05.409 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:05.409 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:05.409 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:05.409 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:05.409 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:05.409 { 00:45:05.409 "cntlid": 145, 00:45:05.409 "qid": 0, 00:45:05.409 "state": "enabled", 00:45:05.409 "thread": "nvmf_tgt_poll_group_000", 00:45:05.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:45:05.409 "listen_address": { 00:45:05.409 "trtype": "TCP", 00:45:05.409 "adrfam": "IPv4", 00:45:05.409 "traddr": "10.0.0.2", 00:45:05.409 "trsvcid": "4420" 00:45:05.409 }, 00:45:05.409 "peer_address": { 00:45:05.409 
"trtype": "TCP", 00:45:05.409 "adrfam": "IPv4", 00:45:05.409 "traddr": "10.0.0.1", 00:45:05.409 "trsvcid": "41930" 00:45:05.409 }, 00:45:05.409 "auth": { 00:45:05.409 "state": "completed", 00:45:05.409 "digest": "sha512", 00:45:05.409 "dhgroup": "ffdhe8192" 00:45:05.409 } 00:45:05.409 } 00:45:05.409 ]' 00:45:05.409 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:45:05.409 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:45:05.409 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:45:05.667 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:45:05.667 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:05.667 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:05.667 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:05.667 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:05.924 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:45:05.924 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MzcwYzJkMGM4NzY4N2NiZmYxNTFkOGY4MWRmODJjMzRkMmU1MzgxNmFlZDY3NzZjLmZTaQ==: --dhchap-ctrl-secret DHHC-1:03:NTJhMmFjYjg2MGZjZjdhNzhkYWIwNmU1YmNiM2RmYmNiYTNmMzYxMTlkOTI0NmFjM2M2Y2E1YjNkY2RjYzcwZTCyLRI=: 00:45:06.858 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:06.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:06.858 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:06.858 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:06.858 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:06.858 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:06.858 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:45:06.858 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:06.858 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:06.858 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:06.858 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:45:06.858 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:45:06.858 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:45:06.858 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:45:06.858 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:06.858 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:45:06.858 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:06.858 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:45:06.858 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:45:06.858 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:45:07.788 request: 00:45:07.788 { 00:45:07.788 "name": "nvme0", 00:45:07.788 "trtype": "tcp", 00:45:07.788 "traddr": "10.0.0.2", 00:45:07.788 "adrfam": "ipv4", 00:45:07.788 "trsvcid": "4420", 00:45:07.788 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:45:07.788 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:45:07.788 "prchk_reftag": false, 00:45:07.788 "prchk_guard": false, 00:45:07.788 "hdgst": false, 00:45:07.788 "ddgst": false, 00:45:07.788 "dhchap_key": "key2", 00:45:07.788 "allow_unrecognized_csi": false, 00:45:07.788 "method": "bdev_nvme_attach_controller", 00:45:07.788 "req_id": 1 00:45:07.788 } 00:45:07.788 Got JSON-RPC error response 00:45:07.788 response: 00:45:07.788 { 00:45:07.788 "code": -5, 00:45:07.788 "message": "Input/output error" 00:45:07.788 } 00:45:07.788 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:45:07.788 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:07.788 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:07.788 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:07.788 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:07.788 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:07.788 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:07.788 05:40:01 
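# Sketch (not captured output) of the negative half of the test exercised above: after the
# target grants only key1 to this host, an attach that presents key2 (or a mismatched
# controller key) must fail, which is what the JSON-RPC "Input/output error" (code -5)
# responses in the log record. Same socket/NQN assumptions as in the sketches above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
SUBNQN=nqn.2024-03.io.spdk:cnode0
if "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key2; then
    echo "attach with an ungranted key unexpectedly succeeded" >&2
    exit 1
fi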
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:07.788 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:07.788 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:07.788 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:07.788 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:07.788 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:45:07.788 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:45:07.788 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:45:07.788 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:45:07.788 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:07.788 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:45:07.788 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:07.788 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:45:07.788 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:45:07.788 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:45:08.352 request: 00:45:08.352 { 00:45:08.352 "name": "nvme0", 00:45:08.352 "trtype": "tcp", 00:45:08.352 "traddr": "10.0.0.2", 00:45:08.352 "adrfam": "ipv4", 00:45:08.352 "trsvcid": "4420", 00:45:08.352 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:45:08.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:45:08.352 "prchk_reftag": false, 00:45:08.352 "prchk_guard": false, 00:45:08.352 "hdgst": false, 00:45:08.352 "ddgst": false, 00:45:08.352 "dhchap_key": "key1", 00:45:08.352 "dhchap_ctrlr_key": "ckey2", 00:45:08.352 "allow_unrecognized_csi": false, 00:45:08.352 "method": "bdev_nvme_attach_controller", 00:45:08.352 "req_id": 1 00:45:08.352 } 00:45:08.352 Got JSON-RPC error response 00:45:08.352 response: 00:45:08.352 { 00:45:08.352 "code": -5, 00:45:08.352 "message": "Input/output error" 00:45:08.352 } 00:45:08.609 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:45:08.609 05:40:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:08.609 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:08.609 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:08.609 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:08.609 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:08.609 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:08.609 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:08.609 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:45:08.609 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:08.609 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:08.609 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:08.609 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:08.609 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:45:08.609 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:08.609 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:45:08.609 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:08.609 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:45:08.609 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:08.609 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:08.609 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:08.609 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:09.541 request: 00:45:09.541 { 00:45:09.541 "name": "nvme0", 00:45:09.541 "trtype": "tcp", 00:45:09.541 "traddr": "10.0.0.2", 00:45:09.541 "adrfam": "ipv4", 00:45:09.541 "trsvcid": "4420", 00:45:09.541 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:45:09.541 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:45:09.541 "prchk_reftag": false, 00:45:09.541 "prchk_guard": false, 00:45:09.541 "hdgst": false, 00:45:09.541 "ddgst": false, 00:45:09.541 "dhchap_key": "key1", 00:45:09.541 "dhchap_ctrlr_key": "ckey1", 00:45:09.541 "allow_unrecognized_csi": false, 00:45:09.541 "method": "bdev_nvme_attach_controller", 00:45:09.541 "req_id": 1 00:45:09.541 } 00:45:09.541 Got JSON-RPC error response 00:45:09.541 response: 00:45:09.541 { 00:45:09.541 "code": -5, 00:45:09.541 "message": "Input/output error" 00:45:09.541 } 00:45:09.541 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 636258 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 636258 ']' 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 636258 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 636258 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 636258' 00:45:09.542 killing process with pid 636258 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 636258 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 636258 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=659022 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 659022 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 659022 ']' 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:09.542 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:09.799 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:09.799 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:45:09.799 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:45:09.799 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:09.799 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:10.056 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:10.056 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:45:10.056 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 659022 00:45:10.056 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 659022 ']' 00:45:10.056 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:10.056 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:10.056 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:10.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:45:10.056 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:10.056 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:10.313 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:10.313 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:45:10.313 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:45:10.313 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:10.313 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:10.313 null0 00:45:10.313 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:10.313 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:45:10.313 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wJe 00:45:10.313 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:10.313 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:10.313 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:10.313 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.MkE ]] 00:45:10.313 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MkE 00:45:10.313 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:10.313 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:10.313 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:10.313 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:45:10.313 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.seS 00:45:10.313 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:10.313 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:10.313 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:10.313 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.b9b ]] 00:45:10.313 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.b9b 00:45:10.313 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:10.313 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:10.570 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:10.570 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:45:10.570 05:40:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.j0e 00:45:10.570 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:10.570 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:10.570 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:10.570 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.OSW ]] 00:45:10.570 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.OSW 00:45:10.570 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:10.570 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:10.570 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:10.570 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:45:10.570 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.RxH 00:45:10.570 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:10.570 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:10.570 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:10.570 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:45:10.570 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:45:10.570 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:45:10.570 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:45:10.570 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:45:10.570 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:45:10.570 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:10.570 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:45:10.570 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:10.570 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:10.570 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:10.571 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:45:10.571 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:45:10.571 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:45:11.943 nvme0n1 00:45:11.943 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:45:11.943 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:45:11.943 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:12.200 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:12.200 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:12.200 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:12.200 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:12.200 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:12.200 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:45:12.200 { 00:45:12.200 "cntlid": 1, 00:45:12.200 "qid": 0, 00:45:12.200 "state": "enabled", 00:45:12.200 "thread": "nvmf_tgt_poll_group_000", 00:45:12.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:45:12.200 "listen_address": { 00:45:12.200 "trtype": "TCP", 00:45:12.200 "adrfam": "IPv4", 00:45:12.200 "traddr": "10.0.0.2", 00:45:12.200 "trsvcid": "4420" 00:45:12.200 }, 00:45:12.200 "peer_address": { 00:45:12.200 "trtype": "TCP", 00:45:12.200 "adrfam": "IPv4", 00:45:12.200 "traddr": "10.0.0.1", 00:45:12.200 "trsvcid": "41994" 00:45:12.200 }, 00:45:12.200 "auth": { 00:45:12.200 "state": "completed", 00:45:12.200 "digest": "sha512", 00:45:12.200 "dhgroup": "ffdhe8192" 00:45:12.200 } 00:45:12.200 } 00:45:12.200 ]' 00:45:12.200 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:45:12.200 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:45:12.200 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:45:12.200 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:45:12.200 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:45:12.200 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:12.200 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:12.200 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:12.458 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:45:12.458 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:45:13.388 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:13.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:13.388 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:13.388 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:13.388 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:13.388 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:13.388 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:45:13.388 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:13.388 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:13.388 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:13.388 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:45:13.388 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:45:13.645 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:45:13.645 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:45:13.645 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:45:13.645 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:45:13.645 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:13.645 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:45:13.645 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:13.645 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:45:13.645 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:45:13.645 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:45:14.211 request: 00:45:14.211 { 00:45:14.211 "name": "nvme0", 00:45:14.211 "trtype": "tcp", 00:45:14.211 "traddr": "10.0.0.2", 00:45:14.211 "adrfam": "ipv4", 00:45:14.211 "trsvcid": "4420", 00:45:14.211 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:45:14.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:45:14.211 "prchk_reftag": false, 00:45:14.211 "prchk_guard": false, 00:45:14.211 "hdgst": false, 00:45:14.211 "ddgst": false, 00:45:14.211 "dhchap_key": "key3", 00:45:14.211 "allow_unrecognized_csi": false, 00:45:14.211 "method": "bdev_nvme_attach_controller", 00:45:14.211 "req_id": 1 00:45:14.211 } 00:45:14.211 Got JSON-RPC error response 00:45:14.211 response: 00:45:14.211 { 00:45:14.211 "code": -5, 00:45:14.211 "message": "Input/output error" 00:45:14.211 } 00:45:14.211 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:45:14.211 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:14.211 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:14.211 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:14.211 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:45:14.211 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:45:14.211 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:45:14.211 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:45:14.211 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:45:14.211 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:45:14.211 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:45:14.211 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:45:14.211 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:14.211 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:45:14.211 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:14.211 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:45:14.211 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:45:14.211 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:45:14.469 request: 00:45:14.469 { 00:45:14.469 "name": "nvme0", 00:45:14.469 "trtype": "tcp", 00:45:14.469 "traddr": "10.0.0.2", 00:45:14.469 "adrfam": "ipv4", 00:45:14.469 "trsvcid": "4420", 00:45:14.469 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:45:14.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:45:14.469 "prchk_reftag": false, 00:45:14.469 "prchk_guard": false, 00:45:14.469 "hdgst": false, 00:45:14.469 "ddgst": false, 00:45:14.469 "dhchap_key": "key3", 00:45:14.469 "allow_unrecognized_csi": false, 00:45:14.469 "method": "bdev_nvme_attach_controller", 00:45:14.469 "req_id": 1 00:45:14.469 } 00:45:14.469 Got JSON-RPC error response 00:45:14.469 response: 00:45:14.469 { 00:45:14.469 "code": -5, 00:45:14.469 "message": "Input/output error" 00:45:14.469 } 00:45:14.727 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:45:14.727 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:14.727 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:14.727 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:14.727 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:45:14.727 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:45:14.727 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:45:14.727 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:45:14.727 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:45:14.727 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:45:14.984 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:14.984 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:14.984 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:14.984 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:14.984 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:14.984 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:14.984 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:14.984 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:14.984 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:45:14.984 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:45:14.984 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:45:14.984 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:45:14.984 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:14.984 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:45:14.984 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:14.984 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:45:14.984 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:45:14.984 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:45:15.549 request: 00:45:15.549 { 00:45:15.549 "name": "nvme0", 00:45:15.549 "trtype": "tcp", 00:45:15.549 "traddr": "10.0.0.2", 00:45:15.549 "adrfam": "ipv4", 00:45:15.549 "trsvcid": "4420", 00:45:15.549 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:45:15.549 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:45:15.549 "prchk_reftag": false, 00:45:15.549 "prchk_guard": false, 00:45:15.549 "hdgst": false, 00:45:15.549 "ddgst": false, 00:45:15.549 "dhchap_key": "key0", 00:45:15.549 "dhchap_ctrlr_key": "key1", 00:45:15.549 "allow_unrecognized_csi": false, 00:45:15.549 "method": "bdev_nvme_attach_controller", 00:45:15.549 "req_id": 1 00:45:15.549 } 00:45:15.549 Got JSON-RPC error response 00:45:15.549 response: 00:45:15.549 { 00:45:15.549 "code": -5, 00:45:15.549 "message": "Input/output error" 00:45:15.549 } 00:45:15.549 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:45:15.549 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:15.549 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:15.549 05:40:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:15.549 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:45:15.549 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:45:15.549 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:45:15.808 nvme0n1 00:45:15.808 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:45:15.808 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:45:15.808 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:16.066 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:16.066 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:16.066 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:16.324 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:45:16.324 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:16.324 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:16.324 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:16.324 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:45:16.324 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:45:16.324 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:45:17.696 nvme0n1 00:45:17.696 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:45:17.696 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:45:17.696 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:17.953 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:17.953 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:45:17.953 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:17.953 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:17.953 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:17.953 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:45:17.953 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:45:17.953 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:18.220 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:18.220 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:45:18.220 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: --dhchap-ctrl-secret DHHC-1:03:MmYwM2I0Y2Q1ZDZjYjU3NzZjYjFlMDk4YmE2MmYzYjQzMmMwZmExZGNkZTcxMTk1NTc4NThiYzZmYWFiNDk4Yknq5ts=: 00:45:19.149 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:45:19.149 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:45:19.149 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:45:19.149 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:45:19.149 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:45:19.149 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:45:19.149 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:45:19.149 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:19.149 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:19.405 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:45:19.405 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:45:19.405 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:45:19.405 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:45:19.405 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:19.405 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:45:19.405 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:19.405 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:45:19.405 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:45:19.405 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:45:20.363 request: 00:45:20.363 { 00:45:20.363 "name": "nvme0", 00:45:20.363 "trtype": "tcp", 00:45:20.363 "traddr": "10.0.0.2", 00:45:20.363 "adrfam": "ipv4", 00:45:20.363 "trsvcid": "4420", 00:45:20.363 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:45:20.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:45:20.363 "prchk_reftag": false, 00:45:20.363 "prchk_guard": false, 00:45:20.363 "hdgst": false, 00:45:20.363 "ddgst": false, 00:45:20.363 "dhchap_key": "key1", 00:45:20.363 "allow_unrecognized_csi": false, 00:45:20.363 "method": "bdev_nvme_attach_controller", 00:45:20.363 "req_id": 1 00:45:20.363 } 00:45:20.363 Got JSON-RPC error response 00:45:20.363 response: 00:45:20.363 { 00:45:20.363 "code": -5, 00:45:20.363 "message": "Input/output error" 00:45:20.363 } 00:45:20.363 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:45:20.363 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:20.363 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:20.363 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:20.363 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:45:20.363 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:45:20.363 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:45:21.837 nvme0n1 00:45:21.837 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:45:21.837 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:45:21.837 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:21.837 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:21.837 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:21.837 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:22.095 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:22.095 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:22.095 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:22.095 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:22.095 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:45:22.095 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:45:22.095 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:45:22.660 nvme0n1 00:45:22.660 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:45:22.660 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:45:22.660 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:22.918 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:22.918 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:22.918 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:23.176 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:45:23.176 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:23.176 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:23.176 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:23.176 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: '' 2s 00:45:23.176 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:45:23.176 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:45:23.176 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: 00:45:23.176 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:45:23.176 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:45:23.176 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:45:23.176 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: ]] 00:45:23.176 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YmY1NmUwNWY5ZWUwYWZhYzllOWEzZmFlMjBlODVkNjK/g6Tz: 00:45:23.176 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:45:23.176 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:45:23.176 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:45:25.097 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:45:25.097 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:45:25.097 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:45:25.097 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:45:25.097 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:45:25.097 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:45:25.097 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:45:25.097 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:45:25.097 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:25.097 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:25.097 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:25.097 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: 2s 00:45:25.097 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:45:25.097 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:45:25.097 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:45:25.097 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: 00:45:25.097 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:45:25.097 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:45:25.097 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:45:25.097 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: ]] 00:45:25.097 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YzkyZGY0YzkzZTYxOTgyZGE1ZjZmZGM4OGJmNDI5YzNjYTc0NTM2NGU0MDU5ZjVkZJ05kA==: 00:45:25.097 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:45:25.097 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:45:27.624 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:45:27.624 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:45:27.624 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:45:27.624 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:45:27.624 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:45:27.624 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:45:27.624 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:45:27.624 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:27.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:27.624 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:45:27.624 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:27.624 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:27.624 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:27.624 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:45:27.624 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:45:27.624 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:45:28.557 nvme0n1 00:45:28.557 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:45:28.557 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:28.557 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:28.557 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:28.557 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:45:28.557 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:45:29.490 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:45:29.490 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:45:29.490 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:29.490 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:29.490 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:29.490 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.490 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:29.490 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.490 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:45:29.490 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:45:30.055 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:45:30.055 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:30.055 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@258 -- # jq -r '.[].name' 00:45:30.055 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:30.055 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:45:30.055 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:30.055 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:30.312 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:30.312 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:45:30.312 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:45:30.312 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:45:30.312 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:45:30.312 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:30.312 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:45:30.312 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:30.312 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:45:30.312 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:45:30.878 request: 00:45:30.878 { 00:45:30.878 "name": "nvme0", 00:45:30.878 "dhchap_key": "key1", 00:45:30.878 "dhchap_ctrlr_key": "key3", 00:45:30.878 "method": "bdev_nvme_set_keys", 00:45:30.878 "req_id": 1 00:45:30.878 } 00:45:30.878 Got JSON-RPC error response 00:45:30.878 response: 00:45:30.878 { 00:45:30.878 "code": -13, 00:45:30.878 "message": "Permission denied" 00:45:30.878 } 00:45:30.878 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:45:30.878 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:30.878 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:30.878 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:30.878 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:45:30.878 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:45:30.878 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:31.134 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:45:31.134 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:45:32.503 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:45:32.503 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:45:32.503 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:32.503 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:45:32.503 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:45:32.503 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:32.503 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:32.503 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:32.503 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:45:32.503 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:45:32.503 05:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:45:33.871 nvme0n1 00:45:33.871 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:45:33.871 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.871 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:33.871 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.871 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:45:33.871 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:45:33.871 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:45:33.871 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
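The entries above exercise DH-HMAC-CHAP re-keying end to end: nvmf_subsystem_set_keys rotates the keys the target subsystem will accept for this host NQN, bdev_nvme_set_keys re-keys the already attached host-side controller to match, and a pairing the subsystem was never given is expected to be rejected with JSON-RPC error -13 (Permission denied). A minimal sketch of that sequence, assuming the sockets used in this run (the target's default /var/tmp/spdk.sock and the host bdev layer on /var/tmp/host.sock) and the key0..key3 keyring names loaded earlier in the test:

  # Target side: rotate the keys the subsystem accepts for this host NQN.
  scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  # Host side: re-key the attached controller to the same pair.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  # Negative check: a pairing the subsystem was not told about must fail with -13.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key key3 && echo "unexpected success"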
00:45:33.871 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:33.871 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:45:33.871 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:33.871 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:45:33.871 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:45:34.800 request: 00:45:34.800 { 00:45:34.800 "name": "nvme0", 00:45:34.800 "dhchap_key": "key2", 00:45:34.800 "dhchap_ctrlr_key": "key0", 00:45:34.800 "method": "bdev_nvme_set_keys", 00:45:34.800 "req_id": 1 00:45:34.800 } 00:45:34.800 Got JSON-RPC error response 00:45:34.800 response: 00:45:34.800 { 00:45:34.800 "code": -13, 00:45:34.800 "message": "Permission denied" 00:45:34.800 } 00:45:34.800 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:45:34.800 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:34.800 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:34.800 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:34.800 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:45:34.800 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:45:34.800 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:35.057 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:45:35.057 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:45:35.990 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:45:35.991 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:45:35.991 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:36.555 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:45:36.556 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:45:36.556 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:45:36.556 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 636278 00:45:36.556 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 636278 ']' 00:45:36.556 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 636278 00:45:36.556 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:45:36.556 05:40:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:36.556 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 636278 00:45:36.556 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:45:36.556 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:45:36.556 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 636278' 00:45:36.556 killing process with pid 636278 00:45:36.556 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 636278 00:45:36.556 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 636278 00:45:36.814 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:45:36.814 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:36.814 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:45:36.814 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:36.814 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:45:36.814 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:36.814 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:36.814 rmmod nvme_tcp 00:45:36.814 rmmod nvme_fabrics 00:45:36.814 rmmod nvme_keyring 00:45:37.077 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:37.077 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:45:37.077 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:45:37.077 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 659022 ']' 00:45:37.077 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 659022 00:45:37.077 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 659022 ']' 00:45:37.077 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 659022 00:45:37.077 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:45:37.077 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:37.077 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 659022 00:45:37.077 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:37.077 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:37.077 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 659022' 00:45:37.077 killing process with pid 659022 00:45:37.077 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 659022 00:45:37.077 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@978 -- # wait 659022 00:45:37.337 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:45:37.337 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:37.337 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:37.337 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:45:37.337 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:45:37.337 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:37.337 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:45:37.337 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:37.337 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:37.337 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:37.337 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:37.337 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:39.242 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:39.242 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.wJe /tmp/spdk.key-sha256.seS /tmp/spdk.key-sha384.j0e /tmp/spdk.key-sha512.RxH /tmp/spdk.key-sha512.MkE /tmp/spdk.key-sha384.b9b /tmp/spdk.key-sha256.OSW '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:45:39.242 00:45:39.242 real 3m30.963s 00:45:39.242 user 8m15.563s 00:45:39.242 sys 0m27.731s 00:45:39.242 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:39.242 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:39.242 ************************************ 00:45:39.242 END TEST nvmf_auth_target 00:45:39.242 ************************************ 00:45:39.242 05:40:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:45:39.242 05:40:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:45:39.242 05:40:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:45:39.242 05:40:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:39.242 05:40:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:45:39.242 ************************************ 00:45:39.242 START TEST nvmf_bdevio_no_huge 00:45:39.242 ************************************ 00:45:39.242 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:45:39.501 * Looking for test storage... 
00:45:39.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:39.501 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:45:39.501 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:45:39.501 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:45:39.501 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:45:39.501 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:39.501 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:39.501 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:39.501 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:45:39.501 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:45:39.501 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:45:39.501 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:45:39.501 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:45:39.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:39.502 --rc genhtml_branch_coverage=1 00:45:39.502 --rc genhtml_function_coverage=1 00:45:39.502 --rc genhtml_legend=1 00:45:39.502 --rc geninfo_all_blocks=1 00:45:39.502 --rc geninfo_unexecuted_blocks=1 00:45:39.502 00:45:39.502 ' 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:45:39.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:39.502 --rc genhtml_branch_coverage=1 00:45:39.502 --rc genhtml_function_coverage=1 00:45:39.502 --rc genhtml_legend=1 00:45:39.502 --rc geninfo_all_blocks=1 00:45:39.502 --rc geninfo_unexecuted_blocks=1 00:45:39.502 00:45:39.502 ' 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:45:39.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:39.502 --rc genhtml_branch_coverage=1 00:45:39.502 --rc genhtml_function_coverage=1 00:45:39.502 --rc genhtml_legend=1 00:45:39.502 --rc geninfo_all_blocks=1 00:45:39.502 --rc geninfo_unexecuted_blocks=1 00:45:39.502 00:45:39.502 ' 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:45:39.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:39.502 --rc genhtml_branch_coverage=1 00:45:39.502 --rc genhtml_function_coverage=1 00:45:39.502 --rc genhtml_legend=1 00:45:39.502 --rc geninfo_all_blocks=1 00:45:39.502 --rc geninfo_unexecuted_blocks=1 00:45:39.502 00:45:39.502 ' 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:45:39.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:39.502 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:39.503 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:39.503 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:45:39.503 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:45:39.503 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:45:39.503 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:45:42.033 
05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:45:42.033 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:45:42.033 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:45:42.033 Found net devices under 0000:0a:00.0: cvl_0_0 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:45:42.033 Found net devices under 0000:0a:00.1: cvl_0_1 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:42.033 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:45:42.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:42.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:45:42.034 00:45:42.034 --- 10.0.0.2 ping statistics --- 00:45:42.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:42.034 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:42.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:42.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:45:42.034 00:45:42.034 --- 10.0.0.1 ping statistics --- 00:45:42.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:42.034 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=664281 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 664281 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 664281 ']' 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:42.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:42.034 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:42.034 [2024-12-09 05:40:35.954115] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:45:42.034 [2024-12-09 05:40:35.954210] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --legacy-mem --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:45:42.034 [2024-12-09 05:40:36.037413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:42.034 [2024-12-09 05:40:36.097520] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:42.034 [2024-12-09 05:40:36.097605] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:42.034 [2024-12-09 05:40:36.097620] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:42.034 [2024-12-09 05:40:36.097631] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:42.034 [2024-12-09 05:40:36.097640] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
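For this test the target application is started without hugepages: nvmfappstart passes --no-huge -s 1024 through to nvmf_tgt inside the test namespace, and the EAL parameter line above shows it paired with --legacy-mem. Condensed, the launch amounts to the following sketch (paths and interface names are the ones from this workspace; -s caps the memory pool at 1024 MB and the 0x78 core mask maps to cores 3-6, matching the reactor start-up notices that follow):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
  # -i 0      shared-memory id, referenced later by 'spdk_trace -s nvmf -i 0' and process_shm
  # -e 0xFFFF enable all tracepoint groups (the "Tracepoint Group Mask 0xFFFF" notice above)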
00:45:42.034 [2024-12-09 05:40:36.098751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:45:42.034 [2024-12-09 05:40:36.098811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:45:42.034 [2024-12-09 05:40:36.098877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:45:42.034 [2024-12-09 05:40:36.098880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:45:42.034 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:42.034 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:45:42.034 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:45:42.034 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:42.034 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:42.034 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:42.034 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:42.034 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.034 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:42.034 [2024-12-09 05:40:36.255782] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:42.292 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.292 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:45:42.292 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.292 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:42.292 Malloc0 00:45:42.292 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.292 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:45:42.292 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.292 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:42.292 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.292 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:45:42.292 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.292 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:42.292 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.292 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:45:42.293 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:42.293 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:42.293 [2024-12-09 05:40:36.293808] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:42.293 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:42.293 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:45:42.293 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:45:42.293 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:45:42.293 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:45:42.293 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:42.293 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:42.293 { 00:45:42.293 "params": { 00:45:42.293 "name": "Nvme$subsystem", 00:45:42.293 "trtype": "$TEST_TRANSPORT", 00:45:42.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:42.293 "adrfam": "ipv4", 00:45:42.293 "trsvcid": "$NVMF_PORT", 00:45:42.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:42.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:42.293 "hdgst": ${hdgst:-false}, 00:45:42.293 "ddgst": ${ddgst:-false} 00:45:42.293 }, 00:45:42.293 "method": "bdev_nvme_attach_controller" 00:45:42.293 } 00:45:42.293 EOF 00:45:42.293 )") 00:45:42.293 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:45:42.293 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:45:42.293 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:45:42.293 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:42.293 "params": { 00:45:42.293 "name": "Nvme1", 00:45:42.293 "trtype": "tcp", 00:45:42.293 "traddr": "10.0.0.2", 00:45:42.293 "adrfam": "ipv4", 00:45:42.293 "trsvcid": "4420", 00:45:42.293 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:42.293 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:42.293 "hdgst": false, 00:45:42.293 "ddgst": false 00:45:42.293 }, 00:45:42.293 "method": "bdev_nvme_attach_controller" 00:45:42.293 }' 00:45:42.293 [2024-12-09 05:40:36.344764] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
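Before bdevio attaches, the target is provisioned over RPC with a 64 MiB RAM-backed namespace behind a TCP listener; the host side is then driven entirely by the JSON that gen_nvmf_target_json renders onto /dev/fd/62, which is why the bdev_nvme_attach_controller block appears inline above. A condensed sketch of the target-side provisioning, assuming the RPC calls land on the target's default socket, /var/tmp/spdk.sock (rpc is just a local shorthand for the rpc.py path used in this workspace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                                     # transport opts taken verbatim from the trace
  $rpc bdev_malloc_create 64 512 -b Malloc0                                        # 64 MiB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420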
00:45:42.293 [2024-12-09 05:40:36.344854] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --legacy-mem --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid664426 ] 00:45:42.293 [2024-12-09 05:40:36.418351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:45:42.293 [2024-12-09 05:40:36.484561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:42.293 [2024-12-09 05:40:36.484613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:45:42.293 [2024-12-09 05:40:36.484617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:42.551 I/O targets: 00:45:42.551 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:45:42.551 00:45:42.551 00:45:42.551 CUnit - A unit testing framework for C - Version 2.1-3 00:45:42.551 http://cunit.sourceforge.net/ 00:45:42.551 00:45:42.551 00:45:42.551 Suite: bdevio tests on: Nvme1n1 00:45:42.551 Test: blockdev write read block ...passed 00:45:42.551 Test: blockdev write zeroes read block ...passed 00:45:42.551 Test: blockdev write zeroes read no split ...passed 00:45:42.809 Test: blockdev write zeroes read split ...passed 00:45:42.809 Test: blockdev write zeroes read split partial ...passed 00:45:42.809 Test: blockdev reset ...[2024-12-09 05:40:36.797913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:45:42.809 [2024-12-09 05:40:36.798022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fc4e0 (9): Bad file descriptor 00:45:42.809 [2024-12-09 05:40:36.817903] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:45:42.809 passed 00:45:42.809 Test: blockdev write read 8 blocks ...passed 00:45:42.809 Test: blockdev write read size > 128k ...passed 00:45:42.809 Test: blockdev write read invalid size ...passed 00:45:42.809 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:45:42.809 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:45:42.809 Test: blockdev write read max offset ...passed 00:45:42.809 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:45:42.809 Test: blockdev writev readv 8 blocks ...passed 00:45:43.066 Test: blockdev writev readv 30 x 1block ...passed 00:45:43.066 Test: blockdev writev readv block ...passed 00:45:43.066 Test: blockdev writev readv size > 128k ...passed 00:45:43.066 Test: blockdev writev readv size > 128k in two iovs ...passed 00:45:43.066 Test: blockdev comparev and writev ...[2024-12-09 05:40:37.113359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:43.066 [2024-12-09 05:40:37.113397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:45:43.066 [2024-12-09 05:40:37.113422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:43.066 [2024-12-09 05:40:37.113441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:45:43.066 [2024-12-09 05:40:37.113750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:43.066 [2024-12-09 05:40:37.113774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:45:43.066 [2024-12-09 05:40:37.113798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:43.066 [2024-12-09 05:40:37.113816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:45:43.066 [2024-12-09 05:40:37.114122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:43.066 [2024-12-09 05:40:37.114146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:45:43.066 [2024-12-09 05:40:37.114168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:43.066 [2024-12-09 05:40:37.114186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:45:43.066 [2024-12-09 05:40:37.114545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:43.066 [2024-12-09 05:40:37.114578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:45:43.066 [2024-12-09 05:40:37.114608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:43.066 [2024-12-09 05:40:37.114627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:45:43.066 passed 00:45:43.066 Test: blockdev nvme passthru rw ...passed 00:45:43.066 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:40:37.198496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:45:43.066 [2024-12-09 05:40:37.198523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:45:43.066 [2024-12-09 05:40:37.198664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:45:43.066 [2024-12-09 05:40:37.198688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:45:43.066 [2024-12-09 05:40:37.198821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:45:43.066 [2024-12-09 05:40:37.198845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:45:43.066 [2024-12-09 05:40:37.198976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:45:43.066 [2024-12-09 05:40:37.199000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:45:43.066 passed 00:45:43.066 Test: blockdev nvme admin passthru ...passed 00:45:43.066 Test: blockdev copy ...passed 00:45:43.066 00:45:43.066 Run Summary: Type Total Ran Passed Failed Inactive 00:45:43.066 suites 1 1 n/a 0 0 00:45:43.066 tests 23 23 23 0 0 00:45:43.066 asserts 152 152 152 0 n/a 00:45:43.066 00:45:43.066 Elapsed time = 1.141 seconds 00:45:43.640 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:43.640 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:43.640 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:43.640 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:43.640 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:45:43.640 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:45:43.640 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:43.640 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:45:43.640 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:43.640 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:45:43.640 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:43.640 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:43.640 rmmod nvme_tcp 00:45:43.640 rmmod nvme_fabrics 00:45:43.640 rmmod nvme_keyring 00:45:43.640 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:43.640 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:45:43.640 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:45:43.640 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 664281 ']' 00:45:43.640 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 664281 00:45:43.640 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 664281 ']' 00:45:43.640 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 664281 00:45:43.640 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:45:43.640 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:43.640 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 664281 00:45:43.640 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:45:43.640 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:45:43.640 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 664281' 00:45:43.640 killing process with pid 664281 00:45:43.640 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 664281 00:45:43.640 05:40:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 664281 00:45:43.901 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:45:43.901 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:43.901 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:43.901 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:45:43.901 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:45:43.901 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:43.901 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:45:43.901 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:44.159 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:44.159 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:44.159 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:44.159 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:46.060 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:46.060 00:45:46.060 real 0m6.717s 00:45:46.060 user 0m10.690s 00:45:46.060 sys 0m2.639s 00:45:46.060 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:46.060 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:45:46.060 ************************************ 00:45:46.060 END TEST nvmf_bdevio_no_huge 00:45:46.060 ************************************ 00:45:46.060 05:40:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:45:46.060 05:40:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:45:46.060 05:40:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:46.060 05:40:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:45:46.060 ************************************ 00:45:46.060 START TEST nvmf_tls 00:45:46.060 ************************************ 00:45:46.060 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:45:46.060 * Looking for test storage... 00:45:46.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:46.060 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:45:46.060 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:45:46.060 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:45:46.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:46.319 --rc genhtml_branch_coverage=1 00:45:46.319 --rc genhtml_function_coverage=1 00:45:46.319 --rc genhtml_legend=1 00:45:46.319 --rc geninfo_all_blocks=1 00:45:46.319 --rc geninfo_unexecuted_blocks=1 00:45:46.319 00:45:46.319 ' 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:45:46.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:46.319 --rc genhtml_branch_coverage=1 00:45:46.319 --rc genhtml_function_coverage=1 00:45:46.319 --rc genhtml_legend=1 00:45:46.319 --rc geninfo_all_blocks=1 00:45:46.319 --rc geninfo_unexecuted_blocks=1 00:45:46.319 00:45:46.319 ' 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:45:46.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:46.319 --rc genhtml_branch_coverage=1 00:45:46.319 --rc genhtml_function_coverage=1 00:45:46.319 --rc genhtml_legend=1 00:45:46.319 --rc geninfo_all_blocks=1 00:45:46.319 --rc geninfo_unexecuted_blocks=1 00:45:46.319 00:45:46.319 ' 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:45:46.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:46.319 --rc genhtml_branch_coverage=1 00:45:46.319 --rc genhtml_function_coverage=1 00:45:46.319 --rc genhtml_legend=1 00:45:46.319 --rc geninfo_all_blocks=1 00:45:46.319 --rc geninfo_unexecuted_blocks=1 00:45:46.319 00:45:46.319 ' 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
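The xtrace above is scripts/common.sh comparing the installed lcov version (1.15) against 2 so tls.sh keeps the legacy coverage flags. A minimal sketch of that lt/cmp_versions logic, simplified from the trace (the real helper handles more separators and comparison operators):

#!/usr/bin/env bash
# Simplified sketch of the version comparison traced above.
lt() { # succeeds when version $1 is strictly older than version $2
  local -a ver1 ver2
  local v n1 n2
  IFS=.- read -ra ver1 <<< "$1"
  IFS=.- read -ra ver2 <<< "$2"
  n1=${#ver1[@]} n2=${#ver2[@]}
  for ((v = 0; v < (n1 > n2 ? n1 : n2); v++)); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1 # equal versions are not "less than"
}
lt 1.15 2 && echo 'lcov < 2: keep the legacy --rc lcov_*_coverage options'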
00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:46.319 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:46.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:45:46.320 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
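The array setup above is nvmf/common.sh building its whitelist of supported NVMe-oF NICs; with SPDK_TEST_NVMF_NICS=e810 only the two Intel E810 device IDs are matched against the PCI bus. A hedged illustration of that scan follows; the device IDs are copied from the log, but walking /sys/bus/pci directly is an illustration, not the script's implementation (the real gather_supported_nvmf_pci_devs works from a pre-built pci_bus_cache):

#!/usr/bin/env bash
# Illustrative scan only; IDs taken from the whitelist traced above.
intel=0x8086
e810=(0x1592 0x159b)        # E810 variants accepted for SPDK_TEST_NVMF_NICS=e810
pci_devs=()
for dev in /sys/bus/pci/devices/*; do
  vendor=$(<"$dev/vendor") device=$(<"$dev/device")
  for id in "${e810[@]}"; do
    if [[ $vendor == "$intel" && $device == "$id" ]]; then
      echo "Found ${dev##*/} ($vendor - $device)"
      pci_devs+=("${dev##*/}")
    fi
  done
done
# Each match must also expose an up kernel net device (cvl_0_0 / cvl_0_1 in this run):
for pci in "${pci_devs[@]}"; do ls "/sys/bus/pci/devices/$pci/net/" 2>/dev/null; done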
00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:45:48.853 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:45:48.853 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:45:48.853 Found net devices under 0000:0a:00.0: cvl_0_0 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:48.853 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:45:48.853 Found net devices under 0000:0a:00.1: cvl_0_1 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:45:48.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:48.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:45:48.854 00:45:48.854 --- 10.0.0.2 ping statistics --- 00:45:48.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:48.854 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:48.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:45:48.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:45:48.854 00:45:48.854 --- 10.0.0.1 ping statistics --- 00:45:48.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:48.854 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=666507 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 666507 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 666507 ']' 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:48.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:48.854 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:48.854 [2024-12-09 05:40:42.787712] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
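For reference, the namespace topology the nvmf_tcp_init trace above builds, condensed from the log (interface names and addresses are specific to this run): the target port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2/24, the initiator keeps cvl_0_1 as 10.0.0.1/24, TCP port 4420 is opened, both directions are pinged, and nvmf_tgt is then started inside the namespace.

# Condensed from the trace; run as root from the spdk checkout.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc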
00:45:48.854 [2024-12-09 05:40:42.787802] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:48.854 [2024-12-09 05:40:42.864042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:48.854 [2024-12-09 05:40:42.922469] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:48.854 [2024-12-09 05:40:42.922536] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:48.854 [2024-12-09 05:40:42.922550] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:48.854 [2024-12-09 05:40:42.922562] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:48.854 [2024-12-09 05:40:42.922571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:48.854 [2024-12-09 05:40:42.923187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:48.854 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:48.854 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:45:48.854 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:45:48.854 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:48.854 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:48.854 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:48.854 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:45:48.854 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:45:49.112 true 00:45:49.370 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:45:49.370 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:45:49.628 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:45:49.628 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:45:49.628 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:45:49.886 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:45:49.886 05:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:45:50.144 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:45:50.144 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:45:50.144 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:45:50.402 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:45:50.402 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:45:50.660 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:45:50.660 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:45:50.660 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:45:50.660 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:45:50.918 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:45:50.918 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:45:50.918 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:45:51.175 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:45:51.175 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:45:51.432 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:45:51.432 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:45:51.432 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:45:51.689 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:45:51.689 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:45:51.947 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:45:51.947 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:45:51.947 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:45:51.947 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:45:51.947 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:45:51.947 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:51.947 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:45:51.947 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:45:51.947 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:45:51.947 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:45:51.947 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:45:51.947 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:45:51.947 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:45:51.947 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:51.947 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:45:51.947 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:45:51.947 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:45:52.205 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:45:52.205 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:45:52.205 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.A7EnXvIjPL 00:45:52.205 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:45:52.205 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.WwiEM5U51K 00:45:52.205 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:45:52.205 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:45:52.205 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.A7EnXvIjPL 00:45:52.205 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.WwiEM5U51K 00:45:52.205 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:45:52.463 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:45:52.721 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.A7EnXvIjPL 00:45:52.721 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.A7EnXvIjPL 00:45:52.721 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:45:52.978 [2024-12-09 05:40:47.163521] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:52.978 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:45:53.237 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:45:53.802 [2024-12-09 05:40:47.741063] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:53.802 [2024-12-09 05:40:47.741339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:53.802 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:45:54.060 malloc0 00:45:54.060 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:45:54.317 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.A7EnXvIjPL 00:45:54.575 05:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:45:54.832 05:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.A7EnXvIjPL 00:46:07.021 Initializing NVMe Controllers 00:46:07.021 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:46:07.021 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:46:07.021 Initialization complete. Launching workers. 00:46:07.021 ======================================================== 00:46:07.021 Latency(us) 00:46:07.021 Device Information : IOPS MiB/s Average min max 00:46:07.021 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8463.66 33.06 7563.89 944.92 9399.35 00:46:07.021 ======================================================== 00:46:07.021 Total : 8463.66 33.06 7563.89 944.92 9399.35 00:46:07.021 00:46:07.021 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.A7EnXvIjPL 00:46:07.021 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:46:07.021 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:46:07.021 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:46:07.021 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.A7EnXvIjPL 00:46:07.021 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:46:07.021 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=668543 00:46:07.021 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:07.021 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:46:07.021 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 668543 /var/tmp/bdevperf.sock 00:46:07.021 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 668543 ']' 00:46:07.021 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:07.021 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:07.021 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:46:07.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:46:07.021 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:07.021 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:07.021 [2024-12-09 05:40:59.202279] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:46:07.021 [2024-12-09 05:40:59.202358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid668543 ] 00:46:07.021 [2024-12-09 05:40:59.268631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:07.021 [2024-12-09 05:40:59.328384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:46:07.021 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:07.021 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:46:07.021 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.A7EnXvIjPL 00:46:07.021 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:46:07.021 [2024-12-09 05:40:59.998431] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:07.021 TLSTESTn1 00:46:07.021 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:46:07.021 Running I/O for 10 seconds... 
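The 10-second verify run above exercises the TLS path configured earlier: the ssl sock implementation is pinned to TLS 1.3, the PSK file is registered as keyring entry key0 on both sides, the listener is created with -k (TLS), and the host entry plus the initiator attach both reference --psk key0. Condensed from the setup_nvmf_tgt/run_bdevperf trace (rpc.py abbreviates the full scripts/rpc.py path; the key file name is the one generated in this run):

# Target side (default RPC socket)
rpc.py sock_impl_set_options -i ssl --tls-version 13
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.A7EnXvIjPL
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
# Initiator side (bdevperf RPC socket)
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.A7EnXvIjPL
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0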
00:46:08.393 3519.00 IOPS, 13.75 MiB/s [2024-12-09T04:41:03.550Z] 3536.00 IOPS, 13.81 MiB/s [2024-12-09T04:41:04.481Z] 3555.67 IOPS, 13.89 MiB/s [2024-12-09T04:41:05.413Z] 3559.50 IOPS, 13.90 MiB/s [2024-12-09T04:41:06.348Z] 3571.60 IOPS, 13.95 MiB/s [2024-12-09T04:41:07.278Z] 3566.00 IOPS, 13.93 MiB/s [2024-12-09T04:41:08.657Z] 3543.00 IOPS, 13.84 MiB/s [2024-12-09T04:41:09.292Z] 3541.88 IOPS, 13.84 MiB/s [2024-12-09T04:41:10.665Z] 3543.89 IOPS, 13.84 MiB/s [2024-12-09T04:41:10.665Z] 3544.20 IOPS, 13.84 MiB/s 00:46:16.440 Latency(us) 00:46:16.440 [2024-12-09T04:41:10.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:16.440 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:46:16.440 Verification LBA range: start 0x0 length 0x2000 00:46:16.440 TLSTESTn1 : 10.02 3548.93 13.86 0.00 0.00 36002.63 6213.78 29515.47 00:46:16.440 [2024-12-09T04:41:10.665Z] =================================================================================================================== 00:46:16.440 [2024-12-09T04:41:10.665Z] Total : 3548.93 13.86 0.00 0.00 36002.63 6213.78 29515.47 00:46:16.440 { 00:46:16.440 "results": [ 00:46:16.440 { 00:46:16.440 "job": "TLSTESTn1", 00:46:16.440 "core_mask": "0x4", 00:46:16.440 "workload": "verify", 00:46:16.440 "status": "finished", 00:46:16.440 "verify_range": { 00:46:16.440 "start": 0, 00:46:16.440 "length": 8192 00:46:16.440 }, 00:46:16.440 "queue_depth": 128, 00:46:16.440 "io_size": 4096, 00:46:16.440 "runtime": 10.022455, 00:46:16.440 "iops": 3548.930875718574, 00:46:16.440 "mibps": 13.86301123327568, 00:46:16.440 "io_failed": 0, 00:46:16.440 "io_timeout": 0, 00:46:16.440 "avg_latency_us": 36002.63328828787, 00:46:16.440 "min_latency_us": 6213.783703703703, 00:46:16.440 "max_latency_us": 29515.472592592592 00:46:16.440 } 00:46:16.440 ], 00:46:16.440 "core_count": 1 00:46:16.440 } 00:46:16.440 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:46:16.440 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 668543 00:46:16.440 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 668543 ']' 00:46:16.440 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 668543 00:46:16.440 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:16.440 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:16.440 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 668543 00:46:16.440 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:46:16.440 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:46:16.440 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 668543' 00:46:16.440 killing process with pid 668543 00:46:16.440 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 668543 00:46:16.440 Received shutdown signal, test time was about 10.000000 seconds 00:46:16.440 00:46:16.440 Latency(us) 00:46:16.440 [2024-12-09T04:41:10.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:16.441 [2024-12-09T04:41:10.666Z] 
=================================================================================================================== 00:46:16.441 [2024-12-09T04:41:10.666Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:16.441 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 668543 00:46:16.441 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WwiEM5U51K 00:46:16.441 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:46:16.441 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WwiEM5U51K 00:46:16.441 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:46:16.441 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:16.441 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:46:16.441 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:16.441 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WwiEM5U51K 00:46:16.441 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:46:16.441 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:46:16.441 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:46:16.441 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.WwiEM5U51K 00:46:16.441 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:46:16.441 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=669873 00:46:16.441 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:46:16.441 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:16.441 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 669873 /var/tmp/bdevperf.sock 00:46:16.441 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 669873 ']' 00:46:16.441 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:16.441 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:16.441 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:46:16.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:46:16.441 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:16.441 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:16.441 [2024-12-09 05:41:10.632331] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:46:16.441 [2024-12-09 05:41:10.632431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid669873 ] 00:46:16.699 [2024-12-09 05:41:10.698783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:16.699 [2024-12-09 05:41:10.754833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:46:16.699 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:16.699 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:46:16.699 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WwiEM5U51K 00:46:16.955 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:46:17.213 [2024-12-09 05:41:11.376049] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:17.213 [2024-12-09 05:41:11.383494] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:46:17.213 [2024-12-09 05:41:11.383531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf542f0 (107): Transport endpoint is not connected 00:46:17.213 [2024-12-09 05:41:11.384508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf542f0 (9): Bad file descriptor 00:46:17.213 [2024-12-09 05:41:11.385507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:46:17.213 [2024-12-09 05:41:11.385531] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:46:17.213 [2024-12-09 05:41:11.385546] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:46:17.213 [2024-12-09 05:41:11.385576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:46:17.213 request: 00:46:17.213 { 00:46:17.213 "name": "TLSTEST", 00:46:17.213 "trtype": "tcp", 00:46:17.213 "traddr": "10.0.0.2", 00:46:17.213 "adrfam": "ipv4", 00:46:17.213 "trsvcid": "4420", 00:46:17.213 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:17.213 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:17.213 "prchk_reftag": false, 00:46:17.213 "prchk_guard": false, 00:46:17.213 "hdgst": false, 00:46:17.213 "ddgst": false, 00:46:17.213 "psk": "key0", 00:46:17.213 "allow_unrecognized_csi": false, 00:46:17.213 "method": "bdev_nvme_attach_controller", 00:46:17.213 "req_id": 1 00:46:17.213 } 00:46:17.213 Got JSON-RPC error response 00:46:17.213 response: 00:46:17.213 { 00:46:17.213 "code": -5, 00:46:17.213 "message": "Input/output error" 00:46:17.213 } 00:46:17.213 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 669873 00:46:17.213 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 669873 ']' 00:46:17.213 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 669873 00:46:17.213 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:17.213 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:17.213 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 669873 00:46:17.471 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:46:17.471 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:46:17.471 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 669873' 00:46:17.471 killing process with pid 669873 00:46:17.471 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 669873 00:46:17.471 Received shutdown signal, test time was about 10.000000 seconds 00:46:17.471 00:46:17.471 Latency(us) 00:46:17.471 [2024-12-09T04:41:11.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:17.471 [2024-12-09T04:41:11.696Z] =================================================================================================================== 00:46:17.471 [2024-12-09T04:41:11.696Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:46:17.471 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 669873 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.A7EnXvIjPL 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.A7EnXvIjPL 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.A7EnXvIjPL 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.A7EnXvIjPL 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=670014 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 670014 /var/tmp/bdevperf.sock 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 670014 ']' 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:46:17.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:17.729 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:17.729 [2024-12-09 05:41:11.749562] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:46:17.729 [2024-12-09 05:41:11.749664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670014 ] 00:46:17.729 [2024-12-09 05:41:11.815637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:17.729 [2024-12-09 05:41:11.870143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:46:17.987 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:17.987 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:46:17.987 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.A7EnXvIjPL 00:46:18.244 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:46:18.501 [2024-12-09 05:41:12.484483] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:18.501 [2024-12-09 05:41:12.496162] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:46:18.501 [2024-12-09 05:41:12.496191] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:46:18.501 [2024-12-09 05:41:12.496242] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:46:18.501 [2024-12-09 05:41:12.496673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17312f0 (107): Transport endpoint is not connected 00:46:18.501 [2024-12-09 05:41:12.497652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17312f0 (9): Bad file descriptor 00:46:18.501 [2024-12-09 05:41:12.498651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:46:18.501 [2024-12-09 05:41:12.498672] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:46:18.501 [2024-12-09 05:41:12.498685] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:46:18.501 [2024-12-09 05:41:12.498699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
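The trace just above is the expected failure for this case: the key is registered locally on the bdevperf RPC socket, but the target has no PSK configured for the host2/cnode1 identity, so the TLS session cannot be established. Condensed from the calls traced above (rpc.py path abbreviated), a sketch of what this step drives:

scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.A7EnXvIjPL
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0
# Expected outcome, per the trace: the target logs "Could not find PSK for identity:
# NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" and the attach
# returns the JSON-RPC Input/output error (-5) shown next.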
00:46:18.501 request: 00:46:18.501 { 00:46:18.501 "name": "TLSTEST", 00:46:18.501 "trtype": "tcp", 00:46:18.501 "traddr": "10.0.0.2", 00:46:18.501 "adrfam": "ipv4", 00:46:18.501 "trsvcid": "4420", 00:46:18.501 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:18.501 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:46:18.501 "prchk_reftag": false, 00:46:18.501 "prchk_guard": false, 00:46:18.501 "hdgst": false, 00:46:18.501 "ddgst": false, 00:46:18.501 "psk": "key0", 00:46:18.501 "allow_unrecognized_csi": false, 00:46:18.501 "method": "bdev_nvme_attach_controller", 00:46:18.501 "req_id": 1 00:46:18.501 } 00:46:18.501 Got JSON-RPC error response 00:46:18.501 response: 00:46:18.501 { 00:46:18.501 "code": -5, 00:46:18.501 "message": "Input/output error" 00:46:18.501 } 00:46:18.501 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 670014 00:46:18.501 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 670014 ']' 00:46:18.501 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 670014 00:46:18.501 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:18.501 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:18.501 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 670014 00:46:18.501 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:46:18.501 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:46:18.501 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 670014' 00:46:18.501 killing process with pid 670014 00:46:18.501 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 670014 00:46:18.501 Received shutdown signal, test time was about 10.000000 seconds 00:46:18.501 00:46:18.502 Latency(us) 00:46:18.502 [2024-12-09T04:41:12.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:18.502 [2024-12-09T04:41:12.727Z] =================================================================================================================== 00:46:18.502 [2024-12-09T04:41:12.727Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:46:18.502 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 670014 00:46:18.759 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:46:18.759 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:46:18.759 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:18.759 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:18.759 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:18.759 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.A7EnXvIjPL 00:46:18.759 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:46:18.759 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.A7EnXvIjPL 00:46:18.759 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:46:18.759 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:18.759 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:46:18.759 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:18.759 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.A7EnXvIjPL 00:46:18.760 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:46:18.760 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:46:18.760 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:46:18.760 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.A7EnXvIjPL 00:46:18.760 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:46:18.760 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=670155 00:46:18.760 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:18.760 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:46:18.760 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 670155 /var/tmp/bdevperf.sock 00:46:18.760 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 670155 ']' 00:46:18.760 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:18.760 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:18.760 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:46:18.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:46:18.760 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:18.760 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:18.760 [2024-12-09 05:41:12.867709] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:46:18.760 [2024-12-09 05:41:12.867808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670155 ] 00:46:18.760 [2024-12-09 05:41:12.934989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:19.018 [2024-12-09 05:41:12.994566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:46:19.018 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:19.018 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:46:19.018 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.A7EnXvIjPL 00:46:19.274 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:46:19.532 [2024-12-09 05:41:13.613848] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:19.532 [2024-12-09 05:41:13.619565] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:46:19.532 [2024-12-09 05:41:13.619597] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:46:19.532 [2024-12-09 05:41:13.619646] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:46:19.532 [2024-12-09 05:41:13.620146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe7e2f0 (107): Transport endpoint is not connected 00:46:19.532 [2024-12-09 05:41:13.621136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe7e2f0 (9): Bad file descriptor 00:46:19.532 [2024-12-09 05:41:13.622136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:46:19.532 [2024-12-09 05:41:13.622158] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:46:19.532 [2024-12-09 05:41:13.622172] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:46:19.532 [2024-12-09 05:41:13.622188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:46:19.532 request: 00:46:19.532 { 00:46:19.532 "name": "TLSTEST", 00:46:19.532 "trtype": "tcp", 00:46:19.532 "traddr": "10.0.0.2", 00:46:19.532 "adrfam": "ipv4", 00:46:19.532 "trsvcid": "4420", 00:46:19.532 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:46:19.532 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:19.532 "prchk_reftag": false, 00:46:19.532 "prchk_guard": false, 00:46:19.532 "hdgst": false, 00:46:19.532 "ddgst": false, 00:46:19.532 "psk": "key0", 00:46:19.532 "allow_unrecognized_csi": false, 00:46:19.532 "method": "bdev_nvme_attach_controller", 00:46:19.532 "req_id": 1 00:46:19.532 } 00:46:19.532 Got JSON-RPC error response 00:46:19.532 response: 00:46:19.532 { 00:46:19.532 "code": -5, 00:46:19.532 "message": "Input/output error" 00:46:19.532 } 00:46:19.532 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 670155 00:46:19.532 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 670155 ']' 00:46:19.532 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 670155 00:46:19.532 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:19.532 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:19.532 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 670155 00:46:19.532 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:46:19.532 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:46:19.532 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 670155' 00:46:19.532 killing process with pid 670155 00:46:19.532 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 670155 00:46:19.532 Received shutdown signal, test time was about 10.000000 seconds 00:46:19.532 00:46:19.532 Latency(us) 00:46:19.532 [2024-12-09T04:41:13.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:19.532 [2024-12-09T04:41:13.757Z] =================================================================================================================== 00:46:19.532 [2024-12-09T04:41:13.757Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:46:19.532 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 670155 00:46:19.790 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:46:19.790 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:46:19.790 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:19.790 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:19.790 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:19.790 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:46:19.790 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:46:19.790 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:46:19.790 05:41:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:46:19.790 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:19.790 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:46:19.790 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:19.790 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:46:19.790 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:46:19.790 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:46:19.790 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:46:19.790 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:46:19.790 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:46:19.790 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=670269 00:46:19.790 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:19.790 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:46:19.790 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 670269 /var/tmp/bdevperf.sock 00:46:19.790 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 670269 ']' 00:46:19.790 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:19.790 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:19.790 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:46:19.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:46:19.790 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:19.790 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:19.790 [2024-12-09 05:41:13.995531] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:46:19.790 [2024-12-09 05:41:13.995630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670269 ] 00:46:20.049 [2024-12-09 05:41:14.069747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:20.049 [2024-12-09 05:41:14.129197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:46:20.049 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:20.049 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:46:20.049 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:46:20.306 [2024-12-09 05:41:14.497683] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:46:20.306 [2024-12-09 05:41:14.497728] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:46:20.306 request: 00:46:20.306 { 00:46:20.306 "name": "key0", 00:46:20.306 "path": "", 00:46:20.306 "method": "keyring_file_add_key", 00:46:20.306 "req_id": 1 00:46:20.306 } 00:46:20.306 Got JSON-RPC error response 00:46:20.306 response: 00:46:20.306 { 00:46:20.306 "code": -1, 00:46:20.306 "message": "Operation not permitted" 00:46:20.306 } 00:46:20.306 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:46:20.564 [2024-12-09 05:41:14.758501] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:20.564 [2024-12-09 05:41:14.758571] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:46:20.564 request: 00:46:20.564 { 00:46:20.564 "name": "TLSTEST", 00:46:20.564 "trtype": "tcp", 00:46:20.564 "traddr": "10.0.0.2", 00:46:20.564 "adrfam": "ipv4", 00:46:20.564 "trsvcid": "4420", 00:46:20.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:20.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:20.564 "prchk_reftag": false, 00:46:20.564 "prchk_guard": false, 00:46:20.564 "hdgst": false, 00:46:20.564 "ddgst": false, 00:46:20.564 "psk": "key0", 00:46:20.564 "allow_unrecognized_csi": false, 00:46:20.564 "method": "bdev_nvme_attach_controller", 00:46:20.564 "req_id": 1 00:46:20.564 } 00:46:20.564 Got JSON-RPC error response 00:46:20.564 response: 00:46:20.564 { 00:46:20.564 "code": -126, 00:46:20.564 "message": "Required key not available" 00:46:20.564 } 00:46:20.564 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 670269 00:46:20.564 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 670269 ']' 00:46:20.564 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 670269 00:46:20.564 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:20.564 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:20.564 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 670269 
00:46:20.822 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:46:20.822 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:46:20.822 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 670269' 00:46:20.822 killing process with pid 670269 00:46:20.822 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 670269 00:46:20.822 Received shutdown signal, test time was about 10.000000 seconds 00:46:20.822 00:46:20.822 Latency(us) 00:46:20.822 [2024-12-09T04:41:15.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:20.822 [2024-12-09T04:41:15.047Z] =================================================================================================================== 00:46:20.822 [2024-12-09T04:41:15.047Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:46:20.822 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 670269 00:46:21.087 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:46:21.087 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:46:21.087 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:21.087 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:21.087 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:21.087 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 666507 00:46:21.087 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 666507 ']' 00:46:21.087 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 666507 00:46:21.087 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:21.087 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:21.087 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 666507 00:46:21.087 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:46:21.087 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:46:21.087 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 666507' 00:46:21.087 killing process with pid 666507 00:46:21.087 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 666507 00:46:21.087 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 666507 00:46:21.347 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:46:21.347 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:46:21.347 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:46:21.347 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:46:21.347 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:46:21.347 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:46:21.347 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:46:21.347 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:46:21.347 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:46:21.347 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.ZUG3jTm7MP 00:46:21.347 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:46:21.347 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.ZUG3jTm7MP 00:46:21.347 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:46:21.347 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:21.347 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:21.347 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:21.347 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=670458 00:46:21.347 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:46:21.347 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 670458 00:46:21.347 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 670458 ']' 00:46:21.347 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:21.347 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:21.347 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:21.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:21.347 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:21.347 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:21.347 [2024-12-09 05:41:15.476165] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:46:21.347 [2024-12-09 05:41:15.476262] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:21.347 [2024-12-09 05:41:15.545417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:21.605 [2024-12-09 05:41:15.597669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:21.605 [2024-12-09 05:41:15.597724] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
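The NVMeTLSkey-1:02:... string produced just above comes from the inline "python -" helper in nvmf/common.sh. A minimal, self-contained sketch of that derivation follows; the little-endian CRC-32 suffix is an assumption based on the NVMe/TCP PSK interchange format, since the helper's exact byte handling is not visible in this trace.

key=00112233445566778899aabbccddeeff0011223344556677   # same input as format_interchange_psk above
python3 - "$key" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()
# Assumption: a little-endian CRC-32 of the configured PSK is appended before base64 encoding.
crc = zlib.crc32(key).to_bytes(4, "little")
# "02" is the hash identifier passed as the second argument (digest=2) above.
print("NVMeTLSkey-1:{:02x}:{}:".format(2, base64.b64encode(key + crc).decode()))
PYEOF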
00:46:21.605 [2024-12-09 05:41:15.597747] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:21.605 [2024-12-09 05:41:15.597758] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:21.605 [2024-12-09 05:41:15.597767] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:21.605 [2024-12-09 05:41:15.598315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:21.606 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:21.606 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:46:21.606 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:21.606 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:21.606 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:21.606 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:21.606 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.ZUG3jTm7MP 00:46:21.606 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZUG3jTm7MP 00:46:21.606 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:46:21.864 [2024-12-09 05:41:15.990011] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:21.864 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:46:22.122 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:46:22.379 [2024-12-09 05:41:16.571602] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:22.379 [2024-12-09 05:41:16.571870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:22.379 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:46:22.637 malloc0 00:46:22.637 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:46:23.203 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZUG3jTm7MP 00:46:23.461 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:46:23.720 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZUG3jTm7MP 00:46:23.720 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:46:23.720 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:46:23.720 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:46:23.720 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZUG3jTm7MP 00:46:23.720 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:46:23.720 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=670745 00:46:23.720 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:46:23.720 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:23.720 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 670745 /var/tmp/bdevperf.sock 00:46:23.720 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 670745 ']' 00:46:23.720 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:23.720 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:23.720 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:46:23.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:46:23.720 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:23.720 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:23.720 [2024-12-09 05:41:17.770472] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:46:23.720 [2024-12-09 05:41:17.770556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670745 ] 00:46:23.720 [2024-12-09 05:41:17.836387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:23.720 [2024-12-09 05:41:17.896378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:46:23.979 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:23.979 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:46:23.979 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZUG3jTm7MP 00:46:24.237 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:46:24.496 [2024-12-09 05:41:18.533907] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:24.496 TLSTESTn1 00:46:24.496 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:46:24.754 Running I/O for 10 seconds... 00:46:26.622 3453.00 IOPS, 13.49 MiB/s [2024-12-09T04:41:21.779Z] 3481.50 IOPS, 13.60 MiB/s [2024-12-09T04:41:23.152Z] 3507.00 IOPS, 13.70 MiB/s [2024-12-09T04:41:24.084Z] 3513.25 IOPS, 13.72 MiB/s [2024-12-09T04:41:25.016Z] 3514.80 IOPS, 13.73 MiB/s [2024-12-09T04:41:25.944Z] 3521.67 IOPS, 13.76 MiB/s [2024-12-09T04:41:26.874Z] 3512.00 IOPS, 13.72 MiB/s [2024-12-09T04:41:27.806Z] 3509.88 IOPS, 13.71 MiB/s [2024-12-09T04:41:29.181Z] 3514.56 IOPS, 13.73 MiB/s [2024-12-09T04:41:29.181Z] 3518.60 IOPS, 13.74 MiB/s 00:46:34.956 Latency(us) 00:46:34.956 [2024-12-09T04:41:29.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:34.956 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:46:34.956 Verification LBA range: start 0x0 length 0x2000 00:46:34.956 TLSTESTn1 : 10.02 3523.94 13.77 0.00 0.00 36261.54 6213.78 33787.45 00:46:34.956 [2024-12-09T04:41:29.181Z] =================================================================================================================== 00:46:34.956 [2024-12-09T04:41:29.181Z] Total : 3523.94 13.77 0.00 0.00 36261.54 6213.78 33787.45 00:46:34.956 { 00:46:34.956 "results": [ 00:46:34.956 { 00:46:34.956 "job": "TLSTESTn1", 00:46:34.956 "core_mask": "0x4", 00:46:34.956 "workload": "verify", 00:46:34.956 "status": "finished", 00:46:34.956 "verify_range": { 00:46:34.956 "start": 0, 00:46:34.956 "length": 8192 00:46:34.956 }, 00:46:34.956 "queue_depth": 128, 00:46:34.956 "io_size": 4096, 00:46:34.956 "runtime": 10.020319, 00:46:34.956 "iops": 3523.9397069095307, 00:46:34.956 "mibps": 13.765389480115354, 00:46:34.956 "io_failed": 0, 00:46:34.956 "io_timeout": 0, 00:46:34.956 "avg_latency_us": 36261.543782831286, 00:46:34.956 "min_latency_us": 6213.783703703703, 00:46:34.956 "max_latency_us": 33787.44888888889 00:46:34.956 } 00:46:34.956 ], 00:46:34.956 
"core_count": 1 00:46:34.956 } 00:46:34.956 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:46:34.956 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 670745 00:46:34.956 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 670745 ']' 00:46:34.956 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 670745 00:46:34.956 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:34.956 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:34.956 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 670745 00:46:34.956 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:46:34.956 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:46:34.956 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 670745' 00:46:34.956 killing process with pid 670745 00:46:34.956 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 670745 00:46:34.956 Received shutdown signal, test time was about 10.000000 seconds 00:46:34.956 00:46:34.956 Latency(us) 00:46:34.956 [2024-12-09T04:41:29.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:34.956 [2024-12-09T04:41:29.181Z] =================================================================================================================== 00:46:34.956 [2024-12-09T04:41:29.181Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:34.956 05:41:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 670745 00:46:34.956 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.ZUG3jTm7MP 00:46:34.956 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZUG3jTm7MP 00:46:34.956 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:46:34.956 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZUG3jTm7MP 00:46:34.956 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:46:34.956 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:34.956 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:46:34.956 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:34.956 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZUG3jTm7MP 00:46:34.956 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:46:34.956 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:46:34.956 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:46:34.956 
05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZUG3jTm7MP 00:46:34.956 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:46:34.956 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=672062 00:46:34.956 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:46:34.956 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:34.957 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 672062 /var/tmp/bdevperf.sock 00:46:34.957 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 672062 ']' 00:46:34.957 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:34.957 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:34.957 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:46:34.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:46:34.957 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:34.957 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:34.957 [2024-12-09 05:41:29.156033] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:46:34.957 [2024-12-09 05:41:29.156124] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid672062 ] 00:46:35.215 [2024-12-09 05:41:29.222862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:35.215 [2024-12-09 05:41:29.278456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:46:35.215 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:35.215 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:46:35.215 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZUG3jTm7MP 00:46:35.474 [2024-12-09 05:41:29.629587] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ZUG3jTm7MP': 0100666 00:46:35.474 [2024-12-09 05:41:29.629623] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:46:35.474 request: 00:46:35.474 { 00:46:35.474 "name": "key0", 00:46:35.474 "path": "/tmp/tmp.ZUG3jTm7MP", 00:46:35.474 "method": "keyring_file_add_key", 00:46:35.474 "req_id": 1 00:46:35.474 } 00:46:35.474 Got JSON-RPC error response 00:46:35.474 response: 00:46:35.474 { 00:46:35.474 "code": -1, 00:46:35.474 "message": "Operation not permitted" 00:46:35.474 } 00:46:35.474 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:46:35.733 [2024-12-09 05:41:29.906445] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:35.733 [2024-12-09 05:41:29.906513] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:46:35.733 request: 00:46:35.733 { 00:46:35.733 "name": "TLSTEST", 00:46:35.733 "trtype": "tcp", 00:46:35.733 "traddr": "10.0.0.2", 00:46:35.733 "adrfam": "ipv4", 00:46:35.733 "trsvcid": "4420", 00:46:35.733 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:35.733 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:35.733 "prchk_reftag": false, 00:46:35.733 "prchk_guard": false, 00:46:35.733 "hdgst": false, 00:46:35.733 "ddgst": false, 00:46:35.733 "psk": "key0", 00:46:35.733 "allow_unrecognized_csi": false, 00:46:35.733 "method": "bdev_nvme_attach_controller", 00:46:35.733 "req_id": 1 00:46:35.733 } 00:46:35.733 Got JSON-RPC error response 00:46:35.733 response: 00:46:35.733 { 00:46:35.733 "code": -126, 00:46:35.733 "message": "Required key not available" 00:46:35.733 } 00:46:35.733 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 672062 00:46:35.733 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 672062 ']' 00:46:35.733 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 672062 00:46:35.733 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:35.733 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:35.733 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 672062 00:46:35.991 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:46:35.991 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:46:35.991 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 672062' 00:46:35.991 killing process with pid 672062 00:46:35.991 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 672062 00:46:35.991 Received shutdown signal, test time was about 10.000000 seconds 00:46:35.991 00:46:35.991 Latency(us) 00:46:35.991 [2024-12-09T04:41:30.216Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:35.991 [2024-12-09T04:41:30.216Z] =================================================================================================================== 00:46:35.991 [2024-12-09T04:41:30.216Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:46:35.991 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 672062 00:46:36.249 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:46:36.249 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:46:36.249 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:36.249 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:36.249 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:36.249 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 670458 00:46:36.249 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 670458 ']' 00:46:36.249 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 670458 00:46:36.249 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:36.249 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:36.249 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 670458 00:46:36.249 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:46:36.249 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:46:36.249 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 670458' 00:46:36.249 killing process with pid 670458 00:46:36.249 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 670458 00:46:36.249 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 670458 00:46:36.508 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:46:36.508 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:36.508 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:36.508 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:36.508 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=672218 
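The keyring rejection above ("Invalid permissions for key file '/tmp/tmp.ZUG3jTm7MP': 0100666") is the expected result of this case: the same file was accepted earlier while it was mode 0600, and is refused after the chmod 0666 in target/tls.sh@171. A sketch of the two observed outcomes (rpc.py path abbreviated):

chmod 0600 /tmp/tmp.ZUG3jTm7MP
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZUG3jTm7MP   # accepted, as in the earlier run
chmod 0666 /tmp/tmp.ZUG3jTm7MP
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZUG3jTm7MP   # rejected: Invalid permissions for key file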
00:46:36.508 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:46:36.508 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 672218 00:46:36.508 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 672218 ']' 00:46:36.508 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:36.508 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:36.508 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:36.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:36.508 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:36.508 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:36.508 [2024-12-09 05:41:30.590199] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:46:36.508 [2024-12-09 05:41:30.590302] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:36.508 [2024-12-09 05:41:30.660814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:36.508 [2024-12-09 05:41:30.710643] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:36.508 [2024-12-09 05:41:30.710705] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:36.508 [2024-12-09 05:41:30.710728] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:36.508 [2024-12-09 05:41:30.710739] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:36.508 [2024-12-09 05:41:30.710747] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
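Here the long-lived target for this test group is brought up: nvmf_tgt runs inside the cvl_0_0_ns_spdk network namespace on core 1 (-m 0x2) with all tracepoint groups enabled (-e 0xFFFF), and waitforlisten blocks until /var/tmp/spdk.sock answers. The app_setup_trace notices above spell out how to inspect those tracepoints; a short sketch of the corresponding commands, assuming app instance id 0 as used here (the copy destination is illustrative only):

  # snapshot the nvmf tracepoint group of instance 0, as the notice suggests
  spdk_trace -s nvmf -i 0
  # or keep the shared-memory trace file for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.saved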
00:46:36.508 [2024-12-09 05:41:30.711310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:36.766 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:36.766 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:46:36.766 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:36.766 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:36.766 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:36.766 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:36.766 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.ZUG3jTm7MP 00:46:36.766 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:46:36.766 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ZUG3jTm7MP 00:46:36.766 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:46:36.766 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:36.766 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:46:36.766 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:36.766 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.ZUG3jTm7MP 00:46:36.766 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZUG3jTm7MP 00:46:36.766 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:46:37.024 [2024-12-09 05:41:31.109358] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:37.024 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:46:37.282 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:46:37.540 [2024-12-09 05:41:31.650829] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:37.540 [2024-12-09 05:41:31.651128] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:37.540 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:46:37.799 malloc0 00:46:37.799 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:46:38.057 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZUG3jTm7MP 00:46:38.315 [2024-12-09 
05:41:32.448250] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ZUG3jTm7MP': 0100666 00:46:38.315 [2024-12-09 05:41:32.448331] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:46:38.315 request: 00:46:38.315 { 00:46:38.315 "name": "key0", 00:46:38.315 "path": "/tmp/tmp.ZUG3jTm7MP", 00:46:38.315 "method": "keyring_file_add_key", 00:46:38.315 "req_id": 1 00:46:38.315 } 00:46:38.315 Got JSON-RPC error response 00:46:38.315 response: 00:46:38.315 { 00:46:38.315 "code": -1, 00:46:38.315 "message": "Operation not permitted" 00:46:38.315 } 00:46:38.315 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:46:38.573 [2024-12-09 05:41:32.721020] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:46:38.573 [2024-12-09 05:41:32.721085] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:46:38.573 request: 00:46:38.573 { 00:46:38.573 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:38.573 "host": "nqn.2016-06.io.spdk:host1", 00:46:38.573 "psk": "key0", 00:46:38.573 "method": "nvmf_subsystem_add_host", 00:46:38.573 "req_id": 1 00:46:38.573 } 00:46:38.573 Got JSON-RPC error response 00:46:38.573 response: 00:46:38.573 { 00:46:38.573 "code": -32603, 00:46:38.573 "message": "Internal error" 00:46:38.573 } 00:46:38.573 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:46:38.573 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:38.573 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:38.574 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:38.574 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 672218 00:46:38.574 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 672218 ']' 00:46:38.574 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 672218 00:46:38.574 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:38.574 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:38.574 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 672218 00:46:38.574 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:46:38.574 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:46:38.574 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 672218' 00:46:38.574 killing process with pid 672218 00:46:38.574 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 672218 00:46:38.574 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 672218 00:46:38.831 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.ZUG3jTm7MP 00:46:38.831 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:46:38.831 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:38.831 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:38.831 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:38.831 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=672513 00:46:38.831 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:46:38.831 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 672513 00:46:38.831 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 672513 ']' 00:46:38.831 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:38.831 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:38.831 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:38.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:38.831 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:38.831 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:39.089 [2024-12-09 05:41:33.077516] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:46:39.089 [2024-12-09 05:41:33.077606] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:39.089 [2024-12-09 05:41:33.152516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:39.089 [2024-12-09 05:41:33.210308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:39.089 [2024-12-09 05:41:33.210398] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:39.089 [2024-12-09 05:41:33.210412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:39.089 [2024-12-09 05:41:33.210424] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:39.089 [2024-12-09 05:41:33.210434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
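With the old target (pid 672218) killed and the key file switched to mode 0600 at target/tls.sh@182, a fresh nvmf_tgt (pid 672513) is started and setup_nvmf_tgt is rerun below; this time keyring_file_add_key succeeds, so nvmf_subsystem_add_host no longer fails with "Key 'key0' does not exist". Condensed, the RPC sequence the helper issues against /var/tmp/spdk.sock is the one visible in the surrounding xtrace (rpc.py path shortened here):

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZUG3jTm7MP
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k flag on nvmf_subsystem_add_listener is what requests the TLS-capable listener, which is why the save_config dumps further down report the listener with "secure_channel": true.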
00:46:39.089 [2024-12-09 05:41:33.211008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:39.346 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:39.346 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:46:39.346 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:39.346 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:39.346 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:39.346 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:39.346 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.ZUG3jTm7MP 00:46:39.346 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZUG3jTm7MP 00:46:39.346 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:46:39.603 [2024-12-09 05:41:33.605311] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:39.603 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:46:39.860 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:46:40.117 [2024-12-09 05:41:34.122671] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:40.117 [2024-12-09 05:41:34.122921] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:40.117 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:46:40.374 malloc0 00:46:40.374 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:46:40.631 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZUG3jTm7MP 00:46:40.888 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:46:41.145 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=672799 00:46:41.145 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:46:41.145 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:41.145 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 672799 /var/tmp/bdevperf.sock 00:46:41.145 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 672799 ']' 00:46:41.145 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:41.145 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:41.145 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:46:41.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:46:41.145 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:41.145 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:41.145 [2024-12-09 05:41:35.260730] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:46:41.145 [2024-12-09 05:41:35.260815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid672799 ] 00:46:41.145 [2024-12-09 05:41:35.326442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:41.402 [2024-12-09 05:41:35.385958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:46:41.402 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:41.402 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:46:41.402 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZUG3jTm7MP 00:46:41.659 05:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:46:41.916 [2024-12-09 05:41:36.016802] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:41.916 TLSTESTn1 00:46:41.916 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:46:42.480 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:46:42.480 "subsystems": [ 00:46:42.480 { 00:46:42.480 "subsystem": "keyring", 00:46:42.480 "config": [ 00:46:42.480 { 00:46:42.480 "method": "keyring_file_add_key", 00:46:42.480 "params": { 00:46:42.480 "name": "key0", 00:46:42.480 "path": "/tmp/tmp.ZUG3jTm7MP" 00:46:42.480 } 00:46:42.480 } 00:46:42.480 ] 00:46:42.480 }, 00:46:42.480 { 00:46:42.480 "subsystem": "iobuf", 00:46:42.480 "config": [ 00:46:42.480 { 00:46:42.480 "method": "iobuf_set_options", 00:46:42.480 "params": { 00:46:42.480 "small_pool_count": 8192, 00:46:42.480 "large_pool_count": 1024, 00:46:42.480 "small_bufsize": 8192, 00:46:42.480 "large_bufsize": 135168, 00:46:42.480 "enable_numa": false 00:46:42.480 } 00:46:42.480 } 00:46:42.480 ] 00:46:42.480 }, 00:46:42.480 { 00:46:42.480 "subsystem": "sock", 00:46:42.480 "config": [ 00:46:42.480 { 00:46:42.480 "method": "sock_set_default_impl", 00:46:42.480 "params": { 00:46:42.480 "impl_name": "posix" 
00:46:42.480 } 00:46:42.480 }, 00:46:42.480 { 00:46:42.480 "method": "sock_impl_set_options", 00:46:42.480 "params": { 00:46:42.480 "impl_name": "ssl", 00:46:42.480 "recv_buf_size": 4096, 00:46:42.480 "send_buf_size": 4096, 00:46:42.480 "enable_recv_pipe": true, 00:46:42.480 "enable_quickack": false, 00:46:42.480 "enable_placement_id": 0, 00:46:42.480 "enable_zerocopy_send_server": true, 00:46:42.480 "enable_zerocopy_send_client": false, 00:46:42.480 "zerocopy_threshold": 0, 00:46:42.480 "tls_version": 0, 00:46:42.480 "enable_ktls": false 00:46:42.480 } 00:46:42.480 }, 00:46:42.480 { 00:46:42.480 "method": "sock_impl_set_options", 00:46:42.480 "params": { 00:46:42.480 "impl_name": "posix", 00:46:42.480 "recv_buf_size": 2097152, 00:46:42.480 "send_buf_size": 2097152, 00:46:42.480 "enable_recv_pipe": true, 00:46:42.480 "enable_quickack": false, 00:46:42.480 "enable_placement_id": 0, 00:46:42.480 "enable_zerocopy_send_server": true, 00:46:42.480 "enable_zerocopy_send_client": false, 00:46:42.480 "zerocopy_threshold": 0, 00:46:42.480 "tls_version": 0, 00:46:42.480 "enable_ktls": false 00:46:42.480 } 00:46:42.480 } 00:46:42.480 ] 00:46:42.480 }, 00:46:42.480 { 00:46:42.480 "subsystem": "vmd", 00:46:42.480 "config": [] 00:46:42.480 }, 00:46:42.480 { 00:46:42.480 "subsystem": "accel", 00:46:42.480 "config": [ 00:46:42.480 { 00:46:42.480 "method": "accel_set_options", 00:46:42.480 "params": { 00:46:42.480 "small_cache_size": 128, 00:46:42.480 "large_cache_size": 16, 00:46:42.480 "task_count": 2048, 00:46:42.480 "sequence_count": 2048, 00:46:42.480 "buf_count": 2048 00:46:42.480 } 00:46:42.480 } 00:46:42.480 ] 00:46:42.480 }, 00:46:42.480 { 00:46:42.480 "subsystem": "bdev", 00:46:42.480 "config": [ 00:46:42.480 { 00:46:42.480 "method": "bdev_set_options", 00:46:42.480 "params": { 00:46:42.480 "bdev_io_pool_size": 65535, 00:46:42.480 "bdev_io_cache_size": 256, 00:46:42.480 "bdev_auto_examine": true, 00:46:42.480 "iobuf_small_cache_size": 128, 00:46:42.480 "iobuf_large_cache_size": 16 00:46:42.480 } 00:46:42.480 }, 00:46:42.480 { 00:46:42.480 "method": "bdev_raid_set_options", 00:46:42.480 "params": { 00:46:42.480 "process_window_size_kb": 1024, 00:46:42.480 "process_max_bandwidth_mb_sec": 0 00:46:42.480 } 00:46:42.480 }, 00:46:42.480 { 00:46:42.480 "method": "bdev_iscsi_set_options", 00:46:42.480 "params": { 00:46:42.480 "timeout_sec": 30 00:46:42.480 } 00:46:42.480 }, 00:46:42.480 { 00:46:42.480 "method": "bdev_nvme_set_options", 00:46:42.480 "params": { 00:46:42.480 "action_on_timeout": "none", 00:46:42.480 "timeout_us": 0, 00:46:42.480 "timeout_admin_us": 0, 00:46:42.480 "keep_alive_timeout_ms": 10000, 00:46:42.480 "arbitration_burst": 0, 00:46:42.480 "low_priority_weight": 0, 00:46:42.480 "medium_priority_weight": 0, 00:46:42.480 "high_priority_weight": 0, 00:46:42.480 "nvme_adminq_poll_period_us": 10000, 00:46:42.480 "nvme_ioq_poll_period_us": 0, 00:46:42.480 "io_queue_requests": 0, 00:46:42.480 "delay_cmd_submit": true, 00:46:42.480 "transport_retry_count": 4, 00:46:42.480 "bdev_retry_count": 3, 00:46:42.480 "transport_ack_timeout": 0, 00:46:42.480 "ctrlr_loss_timeout_sec": 0, 00:46:42.480 "reconnect_delay_sec": 0, 00:46:42.480 "fast_io_fail_timeout_sec": 0, 00:46:42.480 "disable_auto_failback": false, 00:46:42.480 "generate_uuids": false, 00:46:42.480 "transport_tos": 0, 00:46:42.480 "nvme_error_stat": false, 00:46:42.480 "rdma_srq_size": 0, 00:46:42.480 "io_path_stat": false, 00:46:42.480 "allow_accel_sequence": false, 00:46:42.480 "rdma_max_cq_size": 0, 00:46:42.480 
"rdma_cm_event_timeout_ms": 0, 00:46:42.480 "dhchap_digests": [ 00:46:42.480 "sha256", 00:46:42.480 "sha384", 00:46:42.480 "sha512" 00:46:42.480 ], 00:46:42.480 "dhchap_dhgroups": [ 00:46:42.480 "null", 00:46:42.481 "ffdhe2048", 00:46:42.481 "ffdhe3072", 00:46:42.481 "ffdhe4096", 00:46:42.481 "ffdhe6144", 00:46:42.481 "ffdhe8192" 00:46:42.481 ] 00:46:42.481 } 00:46:42.481 }, 00:46:42.481 { 00:46:42.481 "method": "bdev_nvme_set_hotplug", 00:46:42.481 "params": { 00:46:42.481 "period_us": 100000, 00:46:42.481 "enable": false 00:46:42.481 } 00:46:42.481 }, 00:46:42.481 { 00:46:42.481 "method": "bdev_malloc_create", 00:46:42.481 "params": { 00:46:42.481 "name": "malloc0", 00:46:42.481 "num_blocks": 8192, 00:46:42.481 "block_size": 4096, 00:46:42.481 "physical_block_size": 4096, 00:46:42.481 "uuid": "d31fb2da-ed8c-48d5-8d42-6d0c13558151", 00:46:42.481 "optimal_io_boundary": 0, 00:46:42.481 "md_size": 0, 00:46:42.481 "dif_type": 0, 00:46:42.481 "dif_is_head_of_md": false, 00:46:42.481 "dif_pi_format": 0 00:46:42.481 } 00:46:42.481 }, 00:46:42.481 { 00:46:42.481 "method": "bdev_wait_for_examine" 00:46:42.481 } 00:46:42.481 ] 00:46:42.481 }, 00:46:42.481 { 00:46:42.481 "subsystem": "nbd", 00:46:42.481 "config": [] 00:46:42.481 }, 00:46:42.481 { 00:46:42.481 "subsystem": "scheduler", 00:46:42.481 "config": [ 00:46:42.481 { 00:46:42.481 "method": "framework_set_scheduler", 00:46:42.481 "params": { 00:46:42.481 "name": "static" 00:46:42.481 } 00:46:42.481 } 00:46:42.481 ] 00:46:42.481 }, 00:46:42.481 { 00:46:42.481 "subsystem": "nvmf", 00:46:42.481 "config": [ 00:46:42.481 { 00:46:42.481 "method": "nvmf_set_config", 00:46:42.481 "params": { 00:46:42.481 "discovery_filter": "match_any", 00:46:42.481 "admin_cmd_passthru": { 00:46:42.481 "identify_ctrlr": false 00:46:42.481 }, 00:46:42.481 "dhchap_digests": [ 00:46:42.481 "sha256", 00:46:42.481 "sha384", 00:46:42.481 "sha512" 00:46:42.481 ], 00:46:42.481 "dhchap_dhgroups": [ 00:46:42.481 "null", 00:46:42.481 "ffdhe2048", 00:46:42.481 "ffdhe3072", 00:46:42.481 "ffdhe4096", 00:46:42.481 "ffdhe6144", 00:46:42.481 "ffdhe8192" 00:46:42.481 ] 00:46:42.481 } 00:46:42.481 }, 00:46:42.481 { 00:46:42.481 "method": "nvmf_set_max_subsystems", 00:46:42.481 "params": { 00:46:42.481 "max_subsystems": 1024 00:46:42.481 } 00:46:42.481 }, 00:46:42.481 { 00:46:42.481 "method": "nvmf_set_crdt", 00:46:42.481 "params": { 00:46:42.481 "crdt1": 0, 00:46:42.481 "crdt2": 0, 00:46:42.481 "crdt3": 0 00:46:42.481 } 00:46:42.481 }, 00:46:42.481 { 00:46:42.481 "method": "nvmf_create_transport", 00:46:42.481 "params": { 00:46:42.481 "trtype": "TCP", 00:46:42.481 "max_queue_depth": 128, 00:46:42.481 "max_io_qpairs_per_ctrlr": 127, 00:46:42.481 "in_capsule_data_size": 4096, 00:46:42.481 "max_io_size": 131072, 00:46:42.481 "io_unit_size": 131072, 00:46:42.481 "max_aq_depth": 128, 00:46:42.481 "num_shared_buffers": 511, 00:46:42.481 "buf_cache_size": 4294967295, 00:46:42.481 "dif_insert_or_strip": false, 00:46:42.481 "zcopy": false, 00:46:42.481 "c2h_success": false, 00:46:42.481 "sock_priority": 0, 00:46:42.481 "abort_timeout_sec": 1, 00:46:42.481 "ack_timeout": 0, 00:46:42.481 "data_wr_pool_size": 0 00:46:42.481 } 00:46:42.481 }, 00:46:42.481 { 00:46:42.481 "method": "nvmf_create_subsystem", 00:46:42.481 "params": { 00:46:42.481 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:42.481 "allow_any_host": false, 00:46:42.481 "serial_number": "SPDK00000000000001", 00:46:42.481 "model_number": "SPDK bdev Controller", 00:46:42.481 "max_namespaces": 10, 00:46:42.481 "min_cntlid": 1, 00:46:42.481 
"max_cntlid": 65519, 00:46:42.481 "ana_reporting": false 00:46:42.481 } 00:46:42.481 }, 00:46:42.481 { 00:46:42.481 "method": "nvmf_subsystem_add_host", 00:46:42.481 "params": { 00:46:42.481 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:42.481 "host": "nqn.2016-06.io.spdk:host1", 00:46:42.481 "psk": "key0" 00:46:42.481 } 00:46:42.481 }, 00:46:42.481 { 00:46:42.481 "method": "nvmf_subsystem_add_ns", 00:46:42.481 "params": { 00:46:42.481 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:42.481 "namespace": { 00:46:42.481 "nsid": 1, 00:46:42.481 "bdev_name": "malloc0", 00:46:42.481 "nguid": "D31FB2DAED8C48D58D426D0C13558151", 00:46:42.481 "uuid": "d31fb2da-ed8c-48d5-8d42-6d0c13558151", 00:46:42.481 "no_auto_visible": false 00:46:42.481 } 00:46:42.481 } 00:46:42.481 }, 00:46:42.481 { 00:46:42.481 "method": "nvmf_subsystem_add_listener", 00:46:42.481 "params": { 00:46:42.481 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:42.481 "listen_address": { 00:46:42.481 "trtype": "TCP", 00:46:42.481 "adrfam": "IPv4", 00:46:42.481 "traddr": "10.0.0.2", 00:46:42.481 "trsvcid": "4420" 00:46:42.481 }, 00:46:42.481 "secure_channel": true 00:46:42.481 } 00:46:42.481 } 00:46:42.481 ] 00:46:42.481 } 00:46:42.481 ] 00:46:42.481 }' 00:46:42.481 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:46:42.752 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:46:42.752 "subsystems": [ 00:46:42.752 { 00:46:42.752 "subsystem": "keyring", 00:46:42.752 "config": [ 00:46:42.752 { 00:46:42.752 "method": "keyring_file_add_key", 00:46:42.752 "params": { 00:46:42.752 "name": "key0", 00:46:42.752 "path": "/tmp/tmp.ZUG3jTm7MP" 00:46:42.752 } 00:46:42.752 } 00:46:42.752 ] 00:46:42.752 }, 00:46:42.752 { 00:46:42.752 "subsystem": "iobuf", 00:46:42.752 "config": [ 00:46:42.752 { 00:46:42.752 "method": "iobuf_set_options", 00:46:42.752 "params": { 00:46:42.752 "small_pool_count": 8192, 00:46:42.752 "large_pool_count": 1024, 00:46:42.752 "small_bufsize": 8192, 00:46:42.752 "large_bufsize": 135168, 00:46:42.752 "enable_numa": false 00:46:42.752 } 00:46:42.752 } 00:46:42.752 ] 00:46:42.752 }, 00:46:42.752 { 00:46:42.752 "subsystem": "sock", 00:46:42.752 "config": [ 00:46:42.752 { 00:46:42.752 "method": "sock_set_default_impl", 00:46:42.752 "params": { 00:46:42.752 "impl_name": "posix" 00:46:42.752 } 00:46:42.752 }, 00:46:42.752 { 00:46:42.752 "method": "sock_impl_set_options", 00:46:42.752 "params": { 00:46:42.752 "impl_name": "ssl", 00:46:42.752 "recv_buf_size": 4096, 00:46:42.752 "send_buf_size": 4096, 00:46:42.752 "enable_recv_pipe": true, 00:46:42.752 "enable_quickack": false, 00:46:42.752 "enable_placement_id": 0, 00:46:42.752 "enable_zerocopy_send_server": true, 00:46:42.752 "enable_zerocopy_send_client": false, 00:46:42.752 "zerocopy_threshold": 0, 00:46:42.752 "tls_version": 0, 00:46:42.752 "enable_ktls": false 00:46:42.752 } 00:46:42.752 }, 00:46:42.752 { 00:46:42.752 "method": "sock_impl_set_options", 00:46:42.752 "params": { 00:46:42.752 "impl_name": "posix", 00:46:42.752 "recv_buf_size": 2097152, 00:46:42.752 "send_buf_size": 2097152, 00:46:42.752 "enable_recv_pipe": true, 00:46:42.752 "enable_quickack": false, 00:46:42.752 "enable_placement_id": 0, 00:46:42.752 "enable_zerocopy_send_server": true, 00:46:42.752 "enable_zerocopy_send_client": false, 00:46:42.752 "zerocopy_threshold": 0, 00:46:42.752 "tls_version": 0, 00:46:42.752 "enable_ktls": false 00:46:42.752 } 00:46:42.752 
} 00:46:42.752 ] 00:46:42.752 }, 00:46:42.752 { 00:46:42.752 "subsystem": "vmd", 00:46:42.752 "config": [] 00:46:42.752 }, 00:46:42.752 { 00:46:42.752 "subsystem": "accel", 00:46:42.752 "config": [ 00:46:42.752 { 00:46:42.752 "method": "accel_set_options", 00:46:42.752 "params": { 00:46:42.752 "small_cache_size": 128, 00:46:42.752 "large_cache_size": 16, 00:46:42.752 "task_count": 2048, 00:46:42.752 "sequence_count": 2048, 00:46:42.752 "buf_count": 2048 00:46:42.752 } 00:46:42.752 } 00:46:42.752 ] 00:46:42.752 }, 00:46:42.752 { 00:46:42.752 "subsystem": "bdev", 00:46:42.752 "config": [ 00:46:42.752 { 00:46:42.752 "method": "bdev_set_options", 00:46:42.752 "params": { 00:46:42.752 "bdev_io_pool_size": 65535, 00:46:42.752 "bdev_io_cache_size": 256, 00:46:42.752 "bdev_auto_examine": true, 00:46:42.752 "iobuf_small_cache_size": 128, 00:46:42.752 "iobuf_large_cache_size": 16 00:46:42.752 } 00:46:42.752 }, 00:46:42.752 { 00:46:42.752 "method": "bdev_raid_set_options", 00:46:42.752 "params": { 00:46:42.752 "process_window_size_kb": 1024, 00:46:42.752 "process_max_bandwidth_mb_sec": 0 00:46:42.752 } 00:46:42.752 }, 00:46:42.752 { 00:46:42.752 "method": "bdev_iscsi_set_options", 00:46:42.752 "params": { 00:46:42.752 "timeout_sec": 30 00:46:42.752 } 00:46:42.752 }, 00:46:42.752 { 00:46:42.752 "method": "bdev_nvme_set_options", 00:46:42.752 "params": { 00:46:42.752 "action_on_timeout": "none", 00:46:42.752 "timeout_us": 0, 00:46:42.752 "timeout_admin_us": 0, 00:46:42.752 "keep_alive_timeout_ms": 10000, 00:46:42.752 "arbitration_burst": 0, 00:46:42.752 "low_priority_weight": 0, 00:46:42.752 "medium_priority_weight": 0, 00:46:42.752 "high_priority_weight": 0, 00:46:42.752 "nvme_adminq_poll_period_us": 10000, 00:46:42.752 "nvme_ioq_poll_period_us": 0, 00:46:42.752 "io_queue_requests": 512, 00:46:42.752 "delay_cmd_submit": true, 00:46:42.752 "transport_retry_count": 4, 00:46:42.752 "bdev_retry_count": 3, 00:46:42.752 "transport_ack_timeout": 0, 00:46:42.752 "ctrlr_loss_timeout_sec": 0, 00:46:42.752 "reconnect_delay_sec": 0, 00:46:42.752 "fast_io_fail_timeout_sec": 0, 00:46:42.752 "disable_auto_failback": false, 00:46:42.752 "generate_uuids": false, 00:46:42.752 "transport_tos": 0, 00:46:42.752 "nvme_error_stat": false, 00:46:42.752 "rdma_srq_size": 0, 00:46:42.752 "io_path_stat": false, 00:46:42.752 "allow_accel_sequence": false, 00:46:42.752 "rdma_max_cq_size": 0, 00:46:42.752 "rdma_cm_event_timeout_ms": 0, 00:46:42.752 "dhchap_digests": [ 00:46:42.752 "sha256", 00:46:42.752 "sha384", 00:46:42.752 "sha512" 00:46:42.752 ], 00:46:42.752 "dhchap_dhgroups": [ 00:46:42.752 "null", 00:46:42.752 "ffdhe2048", 00:46:42.752 "ffdhe3072", 00:46:42.752 "ffdhe4096", 00:46:42.752 "ffdhe6144", 00:46:42.752 "ffdhe8192" 00:46:42.752 ] 00:46:42.752 } 00:46:42.752 }, 00:46:42.752 { 00:46:42.752 "method": "bdev_nvme_attach_controller", 00:46:42.752 "params": { 00:46:42.752 "name": "TLSTEST", 00:46:42.752 "trtype": "TCP", 00:46:42.752 "adrfam": "IPv4", 00:46:42.752 "traddr": "10.0.0.2", 00:46:42.752 "trsvcid": "4420", 00:46:42.752 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:42.752 "prchk_reftag": false, 00:46:42.752 "prchk_guard": false, 00:46:42.752 "ctrlr_loss_timeout_sec": 0, 00:46:42.752 "reconnect_delay_sec": 0, 00:46:42.752 "fast_io_fail_timeout_sec": 0, 00:46:42.752 "psk": "key0", 00:46:42.752 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:42.752 "hdgst": false, 00:46:42.752 "ddgst": false, 00:46:42.752 "multipath": "multipath" 00:46:42.752 } 00:46:42.752 }, 00:46:42.752 { 00:46:42.752 "method": 
"bdev_nvme_set_hotplug", 00:46:42.752 "params": { 00:46:42.752 "period_us": 100000, 00:46:42.752 "enable": false 00:46:42.752 } 00:46:42.752 }, 00:46:42.752 { 00:46:42.752 "method": "bdev_wait_for_examine" 00:46:42.752 } 00:46:42.752 ] 00:46:42.752 }, 00:46:42.752 { 00:46:42.752 "subsystem": "nbd", 00:46:42.752 "config": [] 00:46:42.752 } 00:46:42.752 ] 00:46:42.752 }' 00:46:42.752 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 672799 00:46:42.752 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 672799 ']' 00:46:42.752 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 672799 00:46:42.752 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:42.753 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:42.753 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 672799 00:46:42.753 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:46:42.753 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:46:42.753 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 672799' 00:46:42.753 killing process with pid 672799 00:46:42.753 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 672799 00:46:42.753 Received shutdown signal, test time was about 10.000000 seconds 00:46:42.753 00:46:42.753 Latency(us) 00:46:42.753 [2024-12-09T04:41:36.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:42.753 [2024-12-09T04:41:36.978Z] =================================================================================================================== 00:46:42.753 [2024-12-09T04:41:36.978Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:46:42.753 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 672799 00:46:43.010 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 672513 00:46:43.010 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 672513 ']' 00:46:43.010 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 672513 00:46:43.010 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:43.010 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:43.010 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 672513 00:46:43.010 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:46:43.010 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:46:43.010 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 672513' 00:46:43.010 killing process with pid 672513 00:46:43.010 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 672513 00:46:43.010 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 672513 00:46:43.269 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:46:43.269 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:43.269 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:46:43.269 "subsystems": [ 00:46:43.269 { 00:46:43.269 "subsystem": "keyring", 00:46:43.269 "config": [ 00:46:43.269 { 00:46:43.269 "method": "keyring_file_add_key", 00:46:43.269 "params": { 00:46:43.269 "name": "key0", 00:46:43.269 "path": "/tmp/tmp.ZUG3jTm7MP" 00:46:43.269 } 00:46:43.269 } 00:46:43.269 ] 00:46:43.269 }, 00:46:43.269 { 00:46:43.269 "subsystem": "iobuf", 00:46:43.269 "config": [ 00:46:43.269 { 00:46:43.269 "method": "iobuf_set_options", 00:46:43.269 "params": { 00:46:43.269 "small_pool_count": 8192, 00:46:43.269 "large_pool_count": 1024, 00:46:43.269 "small_bufsize": 8192, 00:46:43.269 "large_bufsize": 135168, 00:46:43.269 "enable_numa": false 00:46:43.269 } 00:46:43.269 } 00:46:43.269 ] 00:46:43.269 }, 00:46:43.269 { 00:46:43.269 "subsystem": "sock", 00:46:43.269 "config": [ 00:46:43.269 { 00:46:43.269 "method": "sock_set_default_impl", 00:46:43.269 "params": { 00:46:43.269 "impl_name": "posix" 00:46:43.269 } 00:46:43.269 }, 00:46:43.269 { 00:46:43.269 "method": "sock_impl_set_options", 00:46:43.269 "params": { 00:46:43.269 "impl_name": "ssl", 00:46:43.269 "recv_buf_size": 4096, 00:46:43.269 "send_buf_size": 4096, 00:46:43.269 "enable_recv_pipe": true, 00:46:43.269 "enable_quickack": false, 00:46:43.269 "enable_placement_id": 0, 00:46:43.269 "enable_zerocopy_send_server": true, 00:46:43.269 "enable_zerocopy_send_client": false, 00:46:43.269 "zerocopy_threshold": 0, 00:46:43.269 "tls_version": 0, 00:46:43.269 "enable_ktls": false 00:46:43.269 } 00:46:43.269 }, 00:46:43.269 { 00:46:43.269 "method": "sock_impl_set_options", 00:46:43.269 "params": { 00:46:43.269 "impl_name": "posix", 00:46:43.269 "recv_buf_size": 2097152, 00:46:43.269 "send_buf_size": 2097152, 00:46:43.269 "enable_recv_pipe": true, 00:46:43.269 "enable_quickack": false, 00:46:43.269 "enable_placement_id": 0, 00:46:43.269 "enable_zerocopy_send_server": true, 00:46:43.269 "enable_zerocopy_send_client": false, 00:46:43.269 "zerocopy_threshold": 0, 00:46:43.269 "tls_version": 0, 00:46:43.269 "enable_ktls": false 00:46:43.269 } 00:46:43.269 } 00:46:43.269 ] 00:46:43.269 }, 00:46:43.269 { 00:46:43.269 "subsystem": "vmd", 00:46:43.269 "config": [] 00:46:43.269 }, 00:46:43.269 { 00:46:43.269 "subsystem": "accel", 00:46:43.269 "config": [ 00:46:43.269 { 00:46:43.269 "method": "accel_set_options", 00:46:43.269 "params": { 00:46:43.269 "small_cache_size": 128, 00:46:43.269 "large_cache_size": 16, 00:46:43.269 "task_count": 2048, 00:46:43.269 "sequence_count": 2048, 00:46:43.269 "buf_count": 2048 00:46:43.269 } 00:46:43.269 } 00:46:43.269 ] 00:46:43.269 }, 00:46:43.269 { 00:46:43.269 "subsystem": "bdev", 00:46:43.269 "config": [ 00:46:43.269 { 00:46:43.269 "method": "bdev_set_options", 00:46:43.269 "params": { 00:46:43.269 "bdev_io_pool_size": 65535, 00:46:43.269 "bdev_io_cache_size": 256, 00:46:43.269 "bdev_auto_examine": true, 00:46:43.269 "iobuf_small_cache_size": 128, 00:46:43.269 "iobuf_large_cache_size": 16 00:46:43.269 } 00:46:43.269 }, 00:46:43.269 { 00:46:43.269 "method": "bdev_raid_set_options", 00:46:43.269 "params": { 00:46:43.269 "process_window_size_kb": 1024, 00:46:43.269 "process_max_bandwidth_mb_sec": 0 00:46:43.269 } 00:46:43.269 }, 00:46:43.269 { 00:46:43.269 "method": "bdev_iscsi_set_options", 00:46:43.269 "params": { 00:46:43.269 
"timeout_sec": 30 00:46:43.269 } 00:46:43.269 }, 00:46:43.269 { 00:46:43.269 "method": "bdev_nvme_set_options", 00:46:43.269 "params": { 00:46:43.269 "action_on_timeout": "none", 00:46:43.269 "timeout_us": 0, 00:46:43.269 "timeout_admin_us": 0, 00:46:43.269 "keep_alive_timeout_ms": 10000, 00:46:43.269 "arbitration_burst": 0, 00:46:43.269 "low_priority_weight": 0, 00:46:43.269 "medium_priority_weight": 0, 00:46:43.269 "high_priority_weight": 0, 00:46:43.269 "nvme_adminq_poll_period_us": 10000, 00:46:43.269 "nvme_ioq_poll_period_us": 0, 00:46:43.269 "io_queue_requests": 0, 00:46:43.269 "delay_cmd_submit": true, 00:46:43.269 "transport_retry_count": 4, 00:46:43.269 "bdev_retry_count": 3, 00:46:43.269 "transport_ack_timeout": 0, 00:46:43.269 "ctrlr_loss_timeout_sec": 0, 00:46:43.269 "reconnect_delay_sec": 0, 00:46:43.269 "fast_io_fail_timeout_sec": 0, 00:46:43.269 "disable_auto_failback": false, 00:46:43.269 "generate_uuids": false, 00:46:43.269 "transport_tos": 0, 00:46:43.269 "nvme_error_stat": false, 00:46:43.269 "rdma_srq_size": 0, 00:46:43.269 "io_path_stat": false, 00:46:43.269 "allow_accel_sequence": false, 00:46:43.269 "rdma_max_cq_size": 0, 00:46:43.269 "rdma_cm_event_timeout_ms": 0, 00:46:43.269 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:43.269 "dhchap_digests": [ 00:46:43.269 "sha256", 00:46:43.269 "sha384", 00:46:43.269 "sha512" 00:46:43.269 ], 00:46:43.269 "dhchap_dhgroups": [ 00:46:43.269 "null", 00:46:43.269 "ffdhe2048", 00:46:43.269 "ffdhe3072", 00:46:43.269 "ffdhe4096", 00:46:43.269 "ffdhe6144", 00:46:43.269 "ffdhe8192" 00:46:43.269 ] 00:46:43.269 } 00:46:43.269 }, 00:46:43.269 { 00:46:43.269 "method": "bdev_nvme_set_hotplug", 00:46:43.269 "params": { 00:46:43.269 "period_us": 100000, 00:46:43.269 "enable": false 00:46:43.269 } 00:46:43.269 }, 00:46:43.269 { 00:46:43.269 "method": "bdev_malloc_create", 00:46:43.269 "params": { 00:46:43.269 "name": "malloc0", 00:46:43.269 "num_blocks": 8192, 00:46:43.269 "block_size": 4096, 00:46:43.270 "physical_block_size": 4096, 00:46:43.270 "uuid": "d31fb2da-ed8c-48d5-8d42-6d0c13558151", 00:46:43.270 "optimal_io_boundary": 0, 00:46:43.270 "md_size": 0, 00:46:43.270 "dif_type": 0, 00:46:43.270 "dif_is_head_of_md": false, 00:46:43.270 "dif_pi_format": 0 00:46:43.270 } 00:46:43.270 }, 00:46:43.270 { 00:46:43.270 "method": "bdev_wait_for_examine" 00:46:43.270 } 00:46:43.270 ] 00:46:43.270 }, 00:46:43.270 { 00:46:43.270 "subsystem": "nbd", 00:46:43.270 "config": [] 00:46:43.270 }, 00:46:43.270 { 00:46:43.270 "subsystem": "scheduler", 00:46:43.270 "config": [ 00:46:43.270 { 00:46:43.270 "method": "framework_set_scheduler", 00:46:43.270 "params": { 00:46:43.270 "name": "static" 00:46:43.270 } 00:46:43.270 } 00:46:43.270 ] 00:46:43.270 }, 00:46:43.270 { 00:46:43.270 "subsystem": "nvmf", 00:46:43.270 "config": [ 00:46:43.270 { 00:46:43.270 "method": "nvmf_set_config", 00:46:43.270 "params": { 00:46:43.270 "discovery_filter": "match_any", 00:46:43.270 "admin_cmd_passthru": { 00:46:43.270 "identify_ctrlr": false 00:46:43.270 }, 00:46:43.270 "dhchap_digests": [ 00:46:43.270 "sha256", 00:46:43.270 "sha384", 00:46:43.270 "sha512" 00:46:43.270 ], 00:46:43.270 "dhchap_dhgroups": [ 00:46:43.270 "null", 00:46:43.270 "ffdhe2048", 00:46:43.270 "ffdhe3072", 00:46:43.270 "ffdhe4096", 00:46:43.270 "ffdhe6144", 00:46:43.270 "ffdhe8192" 00:46:43.270 ] 00:46:43.270 } 00:46:43.270 }, 00:46:43.270 { 00:46:43.270 "method": "nvmf_set_max_subsystems", 00:46:43.270 "params": { 00:46:43.270 "max_subsystems": 1024 
00:46:43.270 } 00:46:43.270 }, 00:46:43.270 { 00:46:43.270 "method": "nvmf_set_crdt", 00:46:43.270 "params": { 00:46:43.270 "crdt1": 0, 00:46:43.270 "crdt2": 0, 00:46:43.270 "crdt3": 0 00:46:43.270 } 00:46:43.270 }, 00:46:43.270 { 00:46:43.270 "method": "nvmf_create_transport", 00:46:43.270 "params": { 00:46:43.270 "trtype": "TCP", 00:46:43.270 "max_queue_depth": 128, 00:46:43.270 "max_io_qpairs_per_ctrlr": 127, 00:46:43.270 "in_capsule_data_size": 4096, 00:46:43.270 "max_io_size": 131072, 00:46:43.270 "io_unit_size": 131072, 00:46:43.270 "max_aq_depth": 128, 00:46:43.270 "num_shared_buffers": 511, 00:46:43.270 "buf_cache_size": 4294967295, 00:46:43.270 "dif_insert_or_strip": false, 00:46:43.270 "zcopy": false, 00:46:43.270 "c2h_success": false, 00:46:43.270 "sock_priority": 0, 00:46:43.270 "abort_timeout_sec": 1, 00:46:43.270 "ack_timeout": 0, 00:46:43.270 "data_wr_pool_size": 0 00:46:43.270 } 00:46:43.270 }, 00:46:43.270 { 00:46:43.270 "method": "nvmf_create_subsystem", 00:46:43.270 "params": { 00:46:43.270 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:43.270 "allow_any_host": false, 00:46:43.270 "serial_number": "SPDK00000000000001", 00:46:43.270 "model_number": "SPDK bdev Controller", 00:46:43.270 "max_namespaces": 10, 00:46:43.270 "min_cntlid": 1, 00:46:43.270 "max_cntlid": 65519, 00:46:43.270 "ana_reporting": false 00:46:43.270 } 00:46:43.270 }, 00:46:43.270 { 00:46:43.270 "method": "nvmf_subsystem_add_host", 00:46:43.270 "params": { 00:46:43.270 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:43.270 "host": "nqn.2016-06.io.spdk:host1", 00:46:43.270 "psk": "key0" 00:46:43.270 } 00:46:43.270 }, 00:46:43.270 { 00:46:43.270 "method": "nvmf_subsystem_add_ns", 00:46:43.270 "params": { 00:46:43.270 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:43.270 "namespace": { 00:46:43.270 "nsid": 1, 00:46:43.270 "bdev_name": "malloc0", 00:46:43.270 "nguid": "D31FB2DAED8C48D58D426D0C13558151", 00:46:43.270 "uuid": "d31fb2da-ed8c-48d5-8d42-6d0c13558151", 00:46:43.270 "no_auto_visible": false 00:46:43.270 } 00:46:43.270 } 00:46:43.270 }, 00:46:43.270 { 00:46:43.270 "method": "nvmf_subsystem_add_listener", 00:46:43.270 "params": { 00:46:43.270 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:43.270 "listen_address": { 00:46:43.270 "trtype": "TCP", 00:46:43.270 "adrfam": "IPv4", 00:46:43.270 "traddr": "10.0.0.2", 00:46:43.270 "trsvcid": "4420" 00:46:43.270 }, 00:46:43.270 "secure_channel": true 00:46:43.270 } 00:46:43.270 } 00:46:43.270 ] 00:46:43.270 } 00:46:43.270 ] 00:46:43.270 }' 00:46:43.270 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:43.270 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=673082 00:46:43.270 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:46:43.270 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 673082 00:46:43.270 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 673082 ']' 00:46:43.270 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:43.270 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:43.270 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:46:43.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:43.270 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:43.270 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:43.270 [2024-12-09 05:41:37.448164] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:46:43.270 [2024-12-09 05:41:37.448256] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:43.528 [2024-12-09 05:41:37.520421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:43.528 [2024-12-09 05:41:37.578797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:43.528 [2024-12-09 05:41:37.578857] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:43.528 [2024-12-09 05:41:37.578880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:43.528 [2024-12-09 05:41:37.578890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:43.528 [2024-12-09 05:41:37.578900] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:43.528 [2024-12-09 05:41:37.579462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:43.786 [2024-12-09 05:41:37.829959] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:43.786 [2024-12-09 05:41:37.861986] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:43.786 [2024-12-09 05:41:37.862294] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:44.353 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:44.353 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:46:44.353 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:44.353 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:44.353 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:44.353 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:44.353 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=673230 00:46:44.353 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 673230 /var/tmp/bdevperf.sock 00:46:44.353 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 673230 ']' 00:46:44.353 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:44.353 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:46:44.353 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:44.353 05:41:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:46:44.353 "subsystems": [ 00:46:44.353 { 00:46:44.353 "subsystem": "keyring", 00:46:44.353 "config": [ 00:46:44.353 { 00:46:44.353 "method": "keyring_file_add_key", 00:46:44.353 "params": { 00:46:44.353 "name": "key0", 00:46:44.353 "path": "/tmp/tmp.ZUG3jTm7MP" 00:46:44.353 } 00:46:44.353 } 00:46:44.353 ] 00:46:44.353 }, 00:46:44.353 { 00:46:44.353 "subsystem": "iobuf", 00:46:44.353 "config": [ 00:46:44.353 { 00:46:44.353 "method": "iobuf_set_options", 00:46:44.353 "params": { 00:46:44.353 "small_pool_count": 8192, 00:46:44.353 "large_pool_count": 1024, 00:46:44.353 "small_bufsize": 8192, 00:46:44.353 "large_bufsize": 135168, 00:46:44.353 "enable_numa": false 00:46:44.353 } 00:46:44.353 } 00:46:44.353 ] 00:46:44.353 }, 00:46:44.353 { 00:46:44.353 "subsystem": "sock", 00:46:44.353 "config": [ 00:46:44.353 { 00:46:44.353 "method": "sock_set_default_impl", 00:46:44.353 "params": { 00:46:44.353 "impl_name": "posix" 00:46:44.353 } 00:46:44.353 }, 00:46:44.353 { 00:46:44.353 "method": "sock_impl_set_options", 00:46:44.353 "params": { 00:46:44.353 "impl_name": "ssl", 00:46:44.353 "recv_buf_size": 4096, 00:46:44.353 "send_buf_size": 4096, 00:46:44.353 "enable_recv_pipe": true, 00:46:44.353 "enable_quickack": false, 00:46:44.353 "enable_placement_id": 0, 00:46:44.353 "enable_zerocopy_send_server": true, 00:46:44.353 "enable_zerocopy_send_client": false, 00:46:44.353 "zerocopy_threshold": 0, 00:46:44.353 "tls_version": 0, 00:46:44.353 "enable_ktls": false 00:46:44.353 } 00:46:44.353 }, 00:46:44.353 { 00:46:44.353 "method": "sock_impl_set_options", 00:46:44.353 "params": { 00:46:44.353 "impl_name": "posix", 00:46:44.353 "recv_buf_size": 2097152, 00:46:44.353 "send_buf_size": 2097152, 00:46:44.353 "enable_recv_pipe": true, 00:46:44.353 "enable_quickack": false, 00:46:44.353 "enable_placement_id": 0, 00:46:44.353 "enable_zerocopy_send_server": true, 00:46:44.353 "enable_zerocopy_send_client": false, 00:46:44.353 "zerocopy_threshold": 0, 00:46:44.353 "tls_version": 0, 00:46:44.353 "enable_ktls": false 00:46:44.353 } 00:46:44.353 } 00:46:44.353 ] 00:46:44.353 }, 00:46:44.353 { 00:46:44.353 "subsystem": "vmd", 00:46:44.353 "config": [] 00:46:44.353 }, 00:46:44.353 { 00:46:44.353 "subsystem": "accel", 00:46:44.353 "config": [ 00:46:44.353 { 00:46:44.353 "method": "accel_set_options", 00:46:44.353 "params": { 00:46:44.353 "small_cache_size": 128, 00:46:44.353 "large_cache_size": 16, 00:46:44.353 "task_count": 2048, 00:46:44.353 "sequence_count": 2048, 00:46:44.353 "buf_count": 2048 00:46:44.353 } 00:46:44.353 } 00:46:44.353 ] 00:46:44.353 }, 00:46:44.353 { 00:46:44.353 "subsystem": "bdev", 00:46:44.353 "config": [ 00:46:44.353 { 00:46:44.353 "method": "bdev_set_options", 00:46:44.353 "params": { 00:46:44.353 "bdev_io_pool_size": 65535, 00:46:44.353 "bdev_io_cache_size": 256, 00:46:44.353 "bdev_auto_examine": true, 00:46:44.353 "iobuf_small_cache_size": 128, 00:46:44.353 "iobuf_large_cache_size": 16 00:46:44.353 } 00:46:44.353 }, 00:46:44.353 { 00:46:44.353 "method": "bdev_raid_set_options", 00:46:44.353 "params": { 00:46:44.353 "process_window_size_kb": 1024, 00:46:44.353 "process_max_bandwidth_mb_sec": 0 00:46:44.353 } 00:46:44.353 }, 00:46:44.353 { 00:46:44.353 "method": "bdev_iscsi_set_options", 00:46:44.353 "params": { 00:46:44.353 "timeout_sec": 30 00:46:44.353 } 00:46:44.353 }, 00:46:44.353 { 00:46:44.353 "method": "bdev_nvme_set_options", 00:46:44.353 "params": { 00:46:44.353 "action_on_timeout": "none", 00:46:44.353 
"timeout_us": 0, 00:46:44.353 "timeout_admin_us": 0, 00:46:44.353 "keep_alive_timeout_ms": 10000, 00:46:44.353 "arbitration_burst": 0, 00:46:44.353 "low_priority_weight": 0, 00:46:44.353 "medium_priority_weight": 0, 00:46:44.353 "high_priority_weight": 0, 00:46:44.353 "nvme_adminq_poll_period_us": 10000, 00:46:44.353 "nvme_ioq_poll_period_us": 0, 00:46:44.353 "io_queue_requests": 512, 00:46:44.353 "delay_cmd_submit": true, 00:46:44.353 "transport_retry_count": 4, 00:46:44.353 "bdev_retry_count": 3, 00:46:44.353 "transport_ack_timeout": 0, 00:46:44.353 "ctrlr_loss_timeout_sec": 0, 00:46:44.353 "reconnect_delay_sec": 0, 00:46:44.353 "fast_io_fail_timeout_sec": 0, 00:46:44.353 "disable_auto_failback": false, 00:46:44.353 "generate_uuids": false, 00:46:44.353 "transport_tos": 0, 00:46:44.353 "nvme_error_stat": false, 00:46:44.353 "rdma_srq_size": 0, 00:46:44.354 "io_path_stat": false, 00:46:44.354 "allow_accel_sequence": false, 00:46:44.354 "rdma_max_cq_size": 0, 00:46:44.354 "rdma_cm_event_timeout_ms": 0, 00:46:44.354 "dhchap_digests": [ 00:46:44.354 "sha256", 00:46:44.354 "sha384", 00:46:44.354 "sha512" 00:46:44.354 ], 00:46:44.354 "dhchap_dhgroups": [ 00:46:44.354 "null", 00:46:44.354 "ffdhe2048", 00:46:44.354 "ffdhe3072", 00:46:44.354 "ffdhe4096", 00:46:44.354 "ffdhe6144", 00:46:44.354 "ffdhe8192" 00:46:44.354 ] 00:46:44.354 } 00:46:44.354 }, 00:46:44.354 { 00:46:44.354 "method": "bdev_nvme_attach_controller", 00:46:44.354 "params": { 00:46:44.354 "name": "TLSTEST", 00:46:44.354 "trtype": "TCP", 00:46:44.354 "adrfam": "IPv4", 00:46:44.354 "traddr": "10.0.0.2", 00:46:44.354 "trsvcid": "4420", 00:46:44.354 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:44.354 "prchk_reftag": false, 00:46:44.354 "prchk_guard": false, 00:46:44.354 "ctrlr_loss_timeout_sec": 0, 00:46:44.354 "reconnect_delay_sec": 0, 00:46:44.354 "fast_io_fail_timeout_sec": 0, 00:46:44.354 "psk": "key0", 00:46:44.354 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:44.354 "hdgst": false, 00:46:44.354 "ddgst": false, 00:46:44.354 "multipath": "multipath" 00:46:44.354 } 00:46:44.354 }, 00:46:44.354 { 00:46:44.354 "method": "bdev_nvme_set_hotplug", 00:46:44.354 "params": { 00:46:44.354 "period_us": 100000, 00:46:44.354 "enable": false 00:46:44.354 } 00:46:44.354 }, 00:46:44.354 { 00:46:44.354 "method": "bdev_wait_for_examine" 00:46:44.354 } 00:46:44.354 ] 00:46:44.354 }, 00:46:44.354 { 00:46:44.354 "subsystem": "nbd", 00:46:44.354 "config": [] 00:46:44.354 } 00:46:44.354 ] 00:46:44.354 }' 00:46:44.354 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:46:44.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:46:44.354 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:44.354 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:44.354 [2024-12-09 05:41:38.576982] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:46:44.354 [2024-12-09 05:41:38.577068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid673230 ] 00:46:44.612 [2024-12-09 05:41:38.643805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:44.612 [2024-12-09 05:41:38.702348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:46:44.869 [2024-12-09 05:41:38.883799] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:44.869 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:44.869 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:46:44.869 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:46:45.127 Running I/O for 10 seconds... 00:46:46.992 3580.00 IOPS, 13.98 MiB/s [2024-12-09T04:41:42.150Z] 3620.50 IOPS, 14.14 MiB/s [2024-12-09T04:41:43.521Z] 3617.00 IOPS, 14.13 MiB/s [2024-12-09T04:41:44.451Z] 3624.25 IOPS, 14.16 MiB/s [2024-12-09T04:41:45.380Z] 3627.20 IOPS, 14.17 MiB/s [2024-12-09T04:41:46.309Z] 3615.83 IOPS, 14.12 MiB/s [2024-12-09T04:41:47.239Z] 3583.29 IOPS, 14.00 MiB/s [2024-12-09T04:41:48.171Z] 3584.12 IOPS, 14.00 MiB/s [2024-12-09T04:41:49.578Z] 3577.78 IOPS, 13.98 MiB/s [2024-12-09T04:41:49.578Z] 3580.10 IOPS, 13.98 MiB/s 00:46:55.353 Latency(us) 00:46:55.353 [2024-12-09T04:41:49.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:55.353 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:46:55.353 Verification LBA range: start 0x0 length 0x2000 00:46:55.353 TLSTESTn1 : 10.02 3583.97 14.00 0.00 0.00 35647.14 5995.33 29709.65 00:46:55.353 [2024-12-09T04:41:49.578Z] =================================================================================================================== 00:46:55.353 [2024-12-09T04:41:49.578Z] Total : 3583.97 14.00 0.00 0.00 35647.14 5995.33 29709.65 00:46:55.353 { 00:46:55.353 "results": [ 00:46:55.353 { 00:46:55.353 "job": "TLSTESTn1", 00:46:55.353 "core_mask": "0x4", 00:46:55.353 "workload": "verify", 00:46:55.353 "status": "finished", 00:46:55.353 "verify_range": { 00:46:55.353 "start": 0, 00:46:55.353 "length": 8192 00:46:55.353 }, 00:46:55.353 "queue_depth": 128, 00:46:55.353 "io_size": 4096, 00:46:55.353 "runtime": 10.024087, 00:46:55.353 "iops": 3583.967297969381, 00:46:55.353 "mibps": 13.999872257692894, 00:46:55.353 "io_failed": 0, 00:46:55.353 "io_timeout": 0, 00:46:55.353 "avg_latency_us": 35647.13614165744, 00:46:55.353 "min_latency_us": 5995.3303703703705, 00:46:55.353 "max_latency_us": 29709.653333333332 00:46:55.353 } 00:46:55.353 ], 00:46:55.353 "core_count": 1 00:46:55.353 } 00:46:55.353 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:46:55.353 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 673230 00:46:55.353 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 673230 ']' 00:46:55.353 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 673230 00:46:55.353 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:46:55.353 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:55.353 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 673230 00:46:55.353 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:46:55.353 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:46:55.353 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 673230' 00:46:55.353 killing process with pid 673230 00:46:55.353 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 673230 00:46:55.353 Received shutdown signal, test time was about 10.000000 seconds 00:46:55.353 00:46:55.353 Latency(us) 00:46:55.353 [2024-12-09T04:41:49.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:55.353 [2024-12-09T04:41:49.578Z] =================================================================================================================== 00:46:55.353 [2024-12-09T04:41:49.578Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:55.353 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 673230 00:46:55.353 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 673082 00:46:55.353 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 673082 ']' 00:46:55.353 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 673082 00:46:55.353 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:55.353 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:55.353 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 673082 00:46:55.353 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:46:55.353 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:46:55.353 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 673082' 00:46:55.353 killing process with pid 673082 00:46:55.353 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 673082 00:46:55.353 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 673082 00:46:55.689 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:46:55.689 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:55.689 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:55.689 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:55.689 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=674562 00:46:55.689 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:46:55.689 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 674562 00:46:55.689 05:41:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 674562 ']' 00:46:55.689 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:55.689 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:55.689 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:55.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:55.689 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:55.689 05:41:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:55.689 [2024-12-09 05:41:49.869976] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:46:55.689 [2024-12-09 05:41:49.870068] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:55.970 [2024-12-09 05:41:49.942490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:55.970 [2024-12-09 05:41:49.997380] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:55.970 [2024-12-09 05:41:49.997438] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:55.970 [2024-12-09 05:41:49.997463] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:55.970 [2024-12-09 05:41:49.997483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:55.970 [2024-12-09 05:41:49.997493] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
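With the earlier processes gone, a fresh nvmf_tgt (pid 674562) is brought up and then configured for TLS by setup_nvmf_tgt; the shell trace that follows shows each rpc.py call one at a time. Put together, and with the long workspace prefix on scripts/rpc.py shortened to rpc.py, the sequence is:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k asks for a TLS-capable listener (the target logs that TLS support is experimental)
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.ZUG3jTm7MP
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0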
00:46:55.970 [2024-12-09 05:41:49.998056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:55.970 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:55.970 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:46:55.970 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:55.970 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:55.970 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:55.970 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:55.970 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.ZUG3jTm7MP 00:46:55.970 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZUG3jTm7MP 00:46:55.970 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:46:56.228 [2024-12-09 05:41:50.397748] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:56.228 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:46:56.793 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:46:56.793 [2024-12-09 05:41:50.983307] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:56.793 [2024-12-09 05:41:50.983569] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:56.793 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:46:57.358 malloc0 00:46:57.358 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:46:57.358 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZUG3jTm7MP 00:46:57.616 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:46:58.180 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=674853 00:46:58.180 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:46:58.180 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:58.180 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 674853 /var/tmp/bdevperf.sock 00:46:58.180 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 674853 ']' 00:46:58.180 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:58.180 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:58.180 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:46:58.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:46:58.180 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:58.180 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:58.180 [2024-12-09 05:41:52.143852] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:46:58.180 [2024-12-09 05:41:52.143938] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid674853 ] 00:46:58.180 [2024-12-09 05:41:52.208662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:58.180 [2024-12-09 05:41:52.264352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:58.180 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:58.180 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:46:58.180 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZUG3jTm7MP 00:46:58.437 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:46:58.693 [2024-12-09 05:41:52.885614] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:58.950 nvme0n1 00:46:58.950 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:46:58.950 Running I/O for 1 seconds... 
00:46:59.915 3364.00 IOPS, 13.14 MiB/s 00:46:59.915 Latency(us) 00:46:59.915 [2024-12-09T04:41:54.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:59.915 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:46:59.915 Verification LBA range: start 0x0 length 0x2000 00:46:59.915 nvme0n1 : 1.02 3426.28 13.38 0.00 0.00 37046.91 6116.69 42525.58 00:46:59.915 [2024-12-09T04:41:54.140Z] =================================================================================================================== 00:46:59.915 [2024-12-09T04:41:54.140Z] Total : 3426.28 13.38 0.00 0.00 37046.91 6116.69 42525.58 00:46:59.915 { 00:46:59.915 "results": [ 00:46:59.915 { 00:46:59.915 "job": "nvme0n1", 00:46:59.915 "core_mask": "0x2", 00:46:59.915 "workload": "verify", 00:46:59.915 "status": "finished", 00:46:59.915 "verify_range": { 00:46:59.915 "start": 0, 00:46:59.915 "length": 8192 00:46:59.915 }, 00:46:59.915 "queue_depth": 128, 00:46:59.915 "io_size": 4096, 00:46:59.915 "runtime": 1.019473, 00:46:59.915 "iops": 3426.2800486133524, 00:46:59.915 "mibps": 13.383906439895908, 00:46:59.915 "io_failed": 0, 00:46:59.915 "io_timeout": 0, 00:46:59.915 "avg_latency_us": 37046.914797213474, 00:46:59.915 "min_latency_us": 6116.693333333334, 00:46:59.915 "max_latency_us": 42525.58222222222 00:46:59.915 } 00:46:59.915 ], 00:46:59.915 "core_count": 1 00:46:59.915 } 00:46:59.915 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 674853 00:46:59.915 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 674853 ']' 00:46:59.915 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 674853 00:46:59.915 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:46:59.915 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:59.915 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 674853 00:47:00.172 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:47:00.172 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:47:00.172 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 674853' 00:47:00.172 killing process with pid 674853 00:47:00.172 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 674853 00:47:00.172 Received shutdown signal, test time was about 1.000000 seconds 00:47:00.172 00:47:00.172 Latency(us) 00:47:00.172 [2024-12-09T04:41:54.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:00.172 [2024-12-09T04:41:54.397Z] =================================================================================================================== 00:47:00.172 [2024-12-09T04:41:54.397Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:00.172 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 674853 00:47:00.429 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 674562 00:47:00.429 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 674562 ']' 00:47:00.429 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 674562 00:47:00.429 05:41:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:47:00.429 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:00.429 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 674562 00:47:00.429 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:00.429 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:00.429 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 674562' 00:47:00.429 killing process with pid 674562 00:47:00.429 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 674562 00:47:00.429 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 674562 00:47:00.686 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:47:00.686 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:47:00.686 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:00.686 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:47:00.686 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=675247 00:47:00.686 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:47:00.686 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 675247 00:47:00.686 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 675247 ']' 00:47:00.686 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:00.686 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:00.686 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:00.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:00.686 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:00.686 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:47:00.686 [2024-12-09 05:41:54.792127] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:47:00.686 [2024-12-09 05:41:54.792216] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:00.686 [2024-12-09 05:41:54.865300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:00.943 [2024-12-09 05:41:54.921414] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:00.943 [2024-12-09 05:41:54.921465] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:47:00.943 [2024-12-09 05:41:54.921488] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:00.943 [2024-12-09 05:41:54.921521] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:00.943 [2024-12-09 05:41:54.921532] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:00.943 [2024-12-09 05:41:54.922100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:00.943 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:00.943 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:47:00.943 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:47:00.943 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:00.943 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:47:00.943 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:00.943 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:47:00.943 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:00.943 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:47:00.943 [2024-12-09 05:41:55.070007] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:00.943 malloc0 00:47:00.943 [2024-12-09 05:41:55.101664] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:47:00.943 [2024-12-09 05:41:55.101930] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:00.943 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:00.943 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=675273 00:47:00.943 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:47:00.943 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 675273 /var/tmp/bdevperf.sock 00:47:00.943 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 675273 ']' 00:47:00.943 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:47:00.943 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:00.943 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:47:00.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:47:00.943 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:00.943 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:47:01.199 [2024-12-09 05:41:55.173243] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
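Here bdevperf is started in idle mode (-z) with its RPC socket at /var/tmp/bdevperf.sock, so the PSK, the TLS-protected controller and the actual workload are all pushed in from outside; the next part of the trace shows exactly that. A condensed sketch of the flow, with the workspace paths shortened and the wait-for-socket step left out:

  ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
  # once the socket is listening, register the PSK and attach over TLS
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZUG3jTm7MP
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # perform_tests then runs the verify workload for the -t duration given above
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests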
00:47:01.199 [2024-12-09 05:41:55.173332] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid675273 ] 00:47:01.200 [2024-12-09 05:41:55.238780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:01.200 [2024-12-09 05:41:55.294755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:01.456 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:01.456 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:47:01.456 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZUG3jTm7MP 00:47:01.713 05:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:47:01.972 [2024-12-09 05:41:55.967572] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:01.972 nvme0n1 00:47:01.972 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:47:01.972 Running I/O for 1 seconds... 00:47:03.346 3494.00 IOPS, 13.65 MiB/s 00:47:03.346 Latency(us) 00:47:03.346 [2024-12-09T04:41:57.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:03.346 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:47:03.346 Verification LBA range: start 0x0 length 0x2000 00:47:03.346 nvme0n1 : 1.02 3556.13 13.89 0.00 0.00 35679.34 6262.33 27185.30 00:47:03.346 [2024-12-09T04:41:57.571Z] =================================================================================================================== 00:47:03.346 [2024-12-09T04:41:57.571Z] Total : 3556.13 13.89 0.00 0.00 35679.34 6262.33 27185.30 00:47:03.346 { 00:47:03.346 "results": [ 00:47:03.346 { 00:47:03.346 "job": "nvme0n1", 00:47:03.346 "core_mask": "0x2", 00:47:03.346 "workload": "verify", 00:47:03.346 "status": "finished", 00:47:03.346 "verify_range": { 00:47:03.346 "start": 0, 00:47:03.346 "length": 8192 00:47:03.346 }, 00:47:03.346 "queue_depth": 128, 00:47:03.346 "io_size": 4096, 00:47:03.346 "runtime": 1.018524, 00:47:03.346 "iops": 3556.1263161201896, 00:47:03.346 "mibps": 13.89111842234449, 00:47:03.346 "io_failed": 0, 00:47:03.346 "io_timeout": 0, 00:47:03.346 "avg_latency_us": 35679.337317217825, 00:47:03.346 "min_latency_us": 6262.328888888889, 00:47:03.346 "max_latency_us": 27185.303703703703 00:47:03.346 } 00:47:03.346 ], 00:47:03.346 "core_count": 1 00:47:03.346 } 00:47:03.346 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:47:03.346 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:03.346 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:47:03.346 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:03.346 05:41:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:47:03.346 "subsystems": [ 00:47:03.346 { 00:47:03.346 "subsystem": "keyring", 00:47:03.346 "config": [ 00:47:03.346 { 00:47:03.346 "method": "keyring_file_add_key", 00:47:03.346 "params": { 00:47:03.346 "name": "key0", 00:47:03.346 "path": "/tmp/tmp.ZUG3jTm7MP" 00:47:03.346 } 00:47:03.346 } 00:47:03.346 ] 00:47:03.346 }, 00:47:03.346 { 00:47:03.346 "subsystem": "iobuf", 00:47:03.346 "config": [ 00:47:03.346 { 00:47:03.346 "method": "iobuf_set_options", 00:47:03.346 "params": { 00:47:03.346 "small_pool_count": 8192, 00:47:03.346 "large_pool_count": 1024, 00:47:03.346 "small_bufsize": 8192, 00:47:03.346 "large_bufsize": 135168, 00:47:03.346 "enable_numa": false 00:47:03.346 } 00:47:03.346 } 00:47:03.346 ] 00:47:03.346 }, 00:47:03.346 { 00:47:03.346 "subsystem": "sock", 00:47:03.346 "config": [ 00:47:03.346 { 00:47:03.346 "method": "sock_set_default_impl", 00:47:03.346 "params": { 00:47:03.346 "impl_name": "posix" 00:47:03.346 } 00:47:03.346 }, 00:47:03.346 { 00:47:03.347 "method": "sock_impl_set_options", 00:47:03.347 "params": { 00:47:03.347 "impl_name": "ssl", 00:47:03.347 "recv_buf_size": 4096, 00:47:03.347 "send_buf_size": 4096, 00:47:03.347 "enable_recv_pipe": true, 00:47:03.347 "enable_quickack": false, 00:47:03.347 "enable_placement_id": 0, 00:47:03.347 "enable_zerocopy_send_server": true, 00:47:03.347 "enable_zerocopy_send_client": false, 00:47:03.347 "zerocopy_threshold": 0, 00:47:03.347 "tls_version": 0, 00:47:03.347 "enable_ktls": false 00:47:03.347 } 00:47:03.347 }, 00:47:03.347 { 00:47:03.347 "method": "sock_impl_set_options", 00:47:03.347 "params": { 00:47:03.347 "impl_name": "posix", 00:47:03.347 "recv_buf_size": 2097152, 00:47:03.347 "send_buf_size": 2097152, 00:47:03.347 "enable_recv_pipe": true, 00:47:03.347 "enable_quickack": false, 00:47:03.347 "enable_placement_id": 0, 00:47:03.347 "enable_zerocopy_send_server": true, 00:47:03.347 "enable_zerocopy_send_client": false, 00:47:03.347 "zerocopy_threshold": 0, 00:47:03.347 "tls_version": 0, 00:47:03.347 "enable_ktls": false 00:47:03.347 } 00:47:03.347 } 00:47:03.347 ] 00:47:03.347 }, 00:47:03.347 { 00:47:03.347 "subsystem": "vmd", 00:47:03.347 "config": [] 00:47:03.347 }, 00:47:03.347 { 00:47:03.347 "subsystem": "accel", 00:47:03.347 "config": [ 00:47:03.347 { 00:47:03.347 "method": "accel_set_options", 00:47:03.347 "params": { 00:47:03.347 "small_cache_size": 128, 00:47:03.347 "large_cache_size": 16, 00:47:03.347 "task_count": 2048, 00:47:03.347 "sequence_count": 2048, 00:47:03.347 "buf_count": 2048 00:47:03.347 } 00:47:03.347 } 00:47:03.347 ] 00:47:03.347 }, 00:47:03.347 { 00:47:03.347 "subsystem": "bdev", 00:47:03.347 "config": [ 00:47:03.347 { 00:47:03.347 "method": "bdev_set_options", 00:47:03.347 "params": { 00:47:03.347 "bdev_io_pool_size": 65535, 00:47:03.347 "bdev_io_cache_size": 256, 00:47:03.347 "bdev_auto_examine": true, 00:47:03.347 "iobuf_small_cache_size": 128, 00:47:03.347 "iobuf_large_cache_size": 16 00:47:03.347 } 00:47:03.347 }, 00:47:03.347 { 00:47:03.347 "method": "bdev_raid_set_options", 00:47:03.347 "params": { 00:47:03.347 "process_window_size_kb": 1024, 00:47:03.347 "process_max_bandwidth_mb_sec": 0 00:47:03.347 } 00:47:03.347 }, 00:47:03.347 { 00:47:03.347 "method": "bdev_iscsi_set_options", 00:47:03.347 "params": { 00:47:03.347 "timeout_sec": 30 00:47:03.347 } 00:47:03.347 }, 00:47:03.347 { 00:47:03.347 "method": "bdev_nvme_set_options", 00:47:03.347 "params": { 00:47:03.347 "action_on_timeout": "none", 00:47:03.347 
"timeout_us": 0, 00:47:03.347 "timeout_admin_us": 0, 00:47:03.347 "keep_alive_timeout_ms": 10000, 00:47:03.347 "arbitration_burst": 0, 00:47:03.347 "low_priority_weight": 0, 00:47:03.347 "medium_priority_weight": 0, 00:47:03.347 "high_priority_weight": 0, 00:47:03.347 "nvme_adminq_poll_period_us": 10000, 00:47:03.347 "nvme_ioq_poll_period_us": 0, 00:47:03.347 "io_queue_requests": 0, 00:47:03.347 "delay_cmd_submit": true, 00:47:03.347 "transport_retry_count": 4, 00:47:03.347 "bdev_retry_count": 3, 00:47:03.347 "transport_ack_timeout": 0, 00:47:03.347 "ctrlr_loss_timeout_sec": 0, 00:47:03.347 "reconnect_delay_sec": 0, 00:47:03.347 "fast_io_fail_timeout_sec": 0, 00:47:03.347 "disable_auto_failback": false, 00:47:03.347 "generate_uuids": false, 00:47:03.347 "transport_tos": 0, 00:47:03.347 "nvme_error_stat": false, 00:47:03.347 "rdma_srq_size": 0, 00:47:03.347 "io_path_stat": false, 00:47:03.347 "allow_accel_sequence": false, 00:47:03.347 "rdma_max_cq_size": 0, 00:47:03.347 "rdma_cm_event_timeout_ms": 0, 00:47:03.347 "dhchap_digests": [ 00:47:03.347 "sha256", 00:47:03.347 "sha384", 00:47:03.347 "sha512" 00:47:03.347 ], 00:47:03.347 "dhchap_dhgroups": [ 00:47:03.347 "null", 00:47:03.347 "ffdhe2048", 00:47:03.347 "ffdhe3072", 00:47:03.347 "ffdhe4096", 00:47:03.347 "ffdhe6144", 00:47:03.347 "ffdhe8192" 00:47:03.347 ] 00:47:03.347 } 00:47:03.347 }, 00:47:03.347 { 00:47:03.347 "method": "bdev_nvme_set_hotplug", 00:47:03.347 "params": { 00:47:03.347 "period_us": 100000, 00:47:03.347 "enable": false 00:47:03.347 } 00:47:03.347 }, 00:47:03.347 { 00:47:03.347 "method": "bdev_malloc_create", 00:47:03.347 "params": { 00:47:03.347 "name": "malloc0", 00:47:03.347 "num_blocks": 8192, 00:47:03.347 "block_size": 4096, 00:47:03.347 "physical_block_size": 4096, 00:47:03.347 "uuid": "48433029-613c-49d2-a8fc-6f20950b1736", 00:47:03.347 "optimal_io_boundary": 0, 00:47:03.347 "md_size": 0, 00:47:03.347 "dif_type": 0, 00:47:03.347 "dif_is_head_of_md": false, 00:47:03.347 "dif_pi_format": 0 00:47:03.347 } 00:47:03.347 }, 00:47:03.347 { 00:47:03.347 "method": "bdev_wait_for_examine" 00:47:03.347 } 00:47:03.347 ] 00:47:03.347 }, 00:47:03.347 { 00:47:03.347 "subsystem": "nbd", 00:47:03.347 "config": [] 00:47:03.347 }, 00:47:03.347 { 00:47:03.347 "subsystem": "scheduler", 00:47:03.347 "config": [ 00:47:03.347 { 00:47:03.347 "method": "framework_set_scheduler", 00:47:03.347 "params": { 00:47:03.347 "name": "static" 00:47:03.347 } 00:47:03.347 } 00:47:03.347 ] 00:47:03.347 }, 00:47:03.347 { 00:47:03.347 "subsystem": "nvmf", 00:47:03.347 "config": [ 00:47:03.347 { 00:47:03.347 "method": "nvmf_set_config", 00:47:03.347 "params": { 00:47:03.347 "discovery_filter": "match_any", 00:47:03.347 "admin_cmd_passthru": { 00:47:03.347 "identify_ctrlr": false 00:47:03.347 }, 00:47:03.347 "dhchap_digests": [ 00:47:03.347 "sha256", 00:47:03.347 "sha384", 00:47:03.347 "sha512" 00:47:03.347 ], 00:47:03.347 "dhchap_dhgroups": [ 00:47:03.347 "null", 00:47:03.347 "ffdhe2048", 00:47:03.347 "ffdhe3072", 00:47:03.347 "ffdhe4096", 00:47:03.347 "ffdhe6144", 00:47:03.347 "ffdhe8192" 00:47:03.347 ] 00:47:03.347 } 00:47:03.347 }, 00:47:03.347 { 00:47:03.347 "method": "nvmf_set_max_subsystems", 00:47:03.347 "params": { 00:47:03.347 "max_subsystems": 1024 00:47:03.347 } 00:47:03.347 }, 00:47:03.347 { 00:47:03.347 "method": "nvmf_set_crdt", 00:47:03.347 "params": { 00:47:03.347 "crdt1": 0, 00:47:03.347 "crdt2": 0, 00:47:03.347 "crdt3": 0 00:47:03.347 } 00:47:03.347 }, 00:47:03.347 { 00:47:03.347 "method": "nvmf_create_transport", 00:47:03.347 "params": 
{ 00:47:03.347 "trtype": "TCP", 00:47:03.347 "max_queue_depth": 128, 00:47:03.347 "max_io_qpairs_per_ctrlr": 127, 00:47:03.347 "in_capsule_data_size": 4096, 00:47:03.347 "max_io_size": 131072, 00:47:03.347 "io_unit_size": 131072, 00:47:03.347 "max_aq_depth": 128, 00:47:03.347 "num_shared_buffers": 511, 00:47:03.347 "buf_cache_size": 4294967295, 00:47:03.347 "dif_insert_or_strip": false, 00:47:03.347 "zcopy": false, 00:47:03.347 "c2h_success": false, 00:47:03.347 "sock_priority": 0, 00:47:03.347 "abort_timeout_sec": 1, 00:47:03.347 "ack_timeout": 0, 00:47:03.347 "data_wr_pool_size": 0 00:47:03.347 } 00:47:03.347 }, 00:47:03.347 { 00:47:03.347 "method": "nvmf_create_subsystem", 00:47:03.347 "params": { 00:47:03.347 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:47:03.347 "allow_any_host": false, 00:47:03.347 "serial_number": "00000000000000000000", 00:47:03.347 "model_number": "SPDK bdev Controller", 00:47:03.347 "max_namespaces": 32, 00:47:03.347 "min_cntlid": 1, 00:47:03.347 "max_cntlid": 65519, 00:47:03.347 "ana_reporting": false 00:47:03.347 } 00:47:03.347 }, 00:47:03.347 { 00:47:03.347 "method": "nvmf_subsystem_add_host", 00:47:03.347 "params": { 00:47:03.347 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:47:03.347 "host": "nqn.2016-06.io.spdk:host1", 00:47:03.347 "psk": "key0" 00:47:03.347 } 00:47:03.347 }, 00:47:03.347 { 00:47:03.347 "method": "nvmf_subsystem_add_ns", 00:47:03.347 "params": { 00:47:03.347 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:47:03.347 "namespace": { 00:47:03.347 "nsid": 1, 00:47:03.347 "bdev_name": "malloc0", 00:47:03.347 "nguid": "48433029613C49D2A8FC6F20950B1736", 00:47:03.347 "uuid": "48433029-613c-49d2-a8fc-6f20950b1736", 00:47:03.347 "no_auto_visible": false 00:47:03.347 } 00:47:03.347 } 00:47:03.347 }, 00:47:03.347 { 00:47:03.347 "method": "nvmf_subsystem_add_listener", 00:47:03.347 "params": { 00:47:03.347 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:47:03.347 "listen_address": { 00:47:03.347 "trtype": "TCP", 00:47:03.347 "adrfam": "IPv4", 00:47:03.347 "traddr": "10.0.0.2", 00:47:03.347 "trsvcid": "4420" 00:47:03.347 }, 00:47:03.347 "secure_channel": false, 00:47:03.347 "sock_impl": "ssl" 00:47:03.347 } 00:47:03.347 } 00:47:03.347 ] 00:47:03.347 } 00:47:03.347 ] 00:47:03.347 }' 00:47:03.347 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:47:03.606 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:47:03.606 "subsystems": [ 00:47:03.606 { 00:47:03.606 "subsystem": "keyring", 00:47:03.606 "config": [ 00:47:03.606 { 00:47:03.606 "method": "keyring_file_add_key", 00:47:03.606 "params": { 00:47:03.606 "name": "key0", 00:47:03.606 "path": "/tmp/tmp.ZUG3jTm7MP" 00:47:03.606 } 00:47:03.606 } 00:47:03.606 ] 00:47:03.606 }, 00:47:03.606 { 00:47:03.606 "subsystem": "iobuf", 00:47:03.606 "config": [ 00:47:03.606 { 00:47:03.606 "method": "iobuf_set_options", 00:47:03.606 "params": { 00:47:03.606 "small_pool_count": 8192, 00:47:03.606 "large_pool_count": 1024, 00:47:03.606 "small_bufsize": 8192, 00:47:03.606 "large_bufsize": 135168, 00:47:03.606 "enable_numa": false 00:47:03.606 } 00:47:03.606 } 00:47:03.606 ] 00:47:03.606 }, 00:47:03.606 { 00:47:03.606 "subsystem": "sock", 00:47:03.606 "config": [ 00:47:03.606 { 00:47:03.606 "method": "sock_set_default_impl", 00:47:03.606 "params": { 00:47:03.606 "impl_name": "posix" 00:47:03.606 } 00:47:03.606 }, 00:47:03.606 { 00:47:03.606 "method": "sock_impl_set_options", 00:47:03.606 
"params": { 00:47:03.606 "impl_name": "ssl", 00:47:03.606 "recv_buf_size": 4096, 00:47:03.606 "send_buf_size": 4096, 00:47:03.606 "enable_recv_pipe": true, 00:47:03.606 "enable_quickack": false, 00:47:03.606 "enable_placement_id": 0, 00:47:03.606 "enable_zerocopy_send_server": true, 00:47:03.606 "enable_zerocopy_send_client": false, 00:47:03.606 "zerocopy_threshold": 0, 00:47:03.606 "tls_version": 0, 00:47:03.606 "enable_ktls": false 00:47:03.606 } 00:47:03.606 }, 00:47:03.606 { 00:47:03.606 "method": "sock_impl_set_options", 00:47:03.606 "params": { 00:47:03.606 "impl_name": "posix", 00:47:03.606 "recv_buf_size": 2097152, 00:47:03.606 "send_buf_size": 2097152, 00:47:03.606 "enable_recv_pipe": true, 00:47:03.606 "enable_quickack": false, 00:47:03.606 "enable_placement_id": 0, 00:47:03.606 "enable_zerocopy_send_server": true, 00:47:03.606 "enable_zerocopy_send_client": false, 00:47:03.606 "zerocopy_threshold": 0, 00:47:03.606 "tls_version": 0, 00:47:03.606 "enable_ktls": false 00:47:03.606 } 00:47:03.606 } 00:47:03.606 ] 00:47:03.606 }, 00:47:03.606 { 00:47:03.606 "subsystem": "vmd", 00:47:03.606 "config": [] 00:47:03.606 }, 00:47:03.606 { 00:47:03.606 "subsystem": "accel", 00:47:03.606 "config": [ 00:47:03.606 { 00:47:03.606 "method": "accel_set_options", 00:47:03.606 "params": { 00:47:03.606 "small_cache_size": 128, 00:47:03.606 "large_cache_size": 16, 00:47:03.606 "task_count": 2048, 00:47:03.606 "sequence_count": 2048, 00:47:03.606 "buf_count": 2048 00:47:03.606 } 00:47:03.606 } 00:47:03.606 ] 00:47:03.606 }, 00:47:03.606 { 00:47:03.606 "subsystem": "bdev", 00:47:03.606 "config": [ 00:47:03.606 { 00:47:03.606 "method": "bdev_set_options", 00:47:03.606 "params": { 00:47:03.606 "bdev_io_pool_size": 65535, 00:47:03.606 "bdev_io_cache_size": 256, 00:47:03.606 "bdev_auto_examine": true, 00:47:03.606 "iobuf_small_cache_size": 128, 00:47:03.606 "iobuf_large_cache_size": 16 00:47:03.606 } 00:47:03.606 }, 00:47:03.606 { 00:47:03.606 "method": "bdev_raid_set_options", 00:47:03.606 "params": { 00:47:03.606 "process_window_size_kb": 1024, 00:47:03.606 "process_max_bandwidth_mb_sec": 0 00:47:03.606 } 00:47:03.606 }, 00:47:03.606 { 00:47:03.606 "method": "bdev_iscsi_set_options", 00:47:03.606 "params": { 00:47:03.606 "timeout_sec": 30 00:47:03.606 } 00:47:03.606 }, 00:47:03.606 { 00:47:03.606 "method": "bdev_nvme_set_options", 00:47:03.606 "params": { 00:47:03.606 "action_on_timeout": "none", 00:47:03.606 "timeout_us": 0, 00:47:03.606 "timeout_admin_us": 0, 00:47:03.606 "keep_alive_timeout_ms": 10000, 00:47:03.606 "arbitration_burst": 0, 00:47:03.606 "low_priority_weight": 0, 00:47:03.606 "medium_priority_weight": 0, 00:47:03.606 "high_priority_weight": 0, 00:47:03.606 "nvme_adminq_poll_period_us": 10000, 00:47:03.606 "nvme_ioq_poll_period_us": 0, 00:47:03.606 "io_queue_requests": 512, 00:47:03.606 "delay_cmd_submit": true, 00:47:03.607 "transport_retry_count": 4, 00:47:03.607 "bdev_retry_count": 3, 00:47:03.607 "transport_ack_timeout": 0, 00:47:03.607 "ctrlr_loss_timeout_sec": 0, 00:47:03.607 "reconnect_delay_sec": 0, 00:47:03.607 "fast_io_fail_timeout_sec": 0, 00:47:03.607 "disable_auto_failback": false, 00:47:03.607 "generate_uuids": false, 00:47:03.607 "transport_tos": 0, 00:47:03.607 "nvme_error_stat": false, 00:47:03.607 "rdma_srq_size": 0, 00:47:03.607 "io_path_stat": false, 00:47:03.607 "allow_accel_sequence": false, 00:47:03.607 "rdma_max_cq_size": 0, 00:47:03.607 "rdma_cm_event_timeout_ms": 0, 00:47:03.607 "dhchap_digests": [ 00:47:03.607 "sha256", 00:47:03.607 "sha384", 00:47:03.607 
"sha512" 00:47:03.607 ], 00:47:03.607 "dhchap_dhgroups": [ 00:47:03.607 "null", 00:47:03.607 "ffdhe2048", 00:47:03.607 "ffdhe3072", 00:47:03.607 "ffdhe4096", 00:47:03.607 "ffdhe6144", 00:47:03.607 "ffdhe8192" 00:47:03.607 ] 00:47:03.607 } 00:47:03.607 }, 00:47:03.607 { 00:47:03.607 "method": "bdev_nvme_attach_controller", 00:47:03.607 "params": { 00:47:03.607 "name": "nvme0", 00:47:03.607 "trtype": "TCP", 00:47:03.607 "adrfam": "IPv4", 00:47:03.607 "traddr": "10.0.0.2", 00:47:03.607 "trsvcid": "4420", 00:47:03.607 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:47:03.607 "prchk_reftag": false, 00:47:03.607 "prchk_guard": false, 00:47:03.607 "ctrlr_loss_timeout_sec": 0, 00:47:03.607 "reconnect_delay_sec": 0, 00:47:03.607 "fast_io_fail_timeout_sec": 0, 00:47:03.607 "psk": "key0", 00:47:03.607 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:47:03.607 "hdgst": false, 00:47:03.607 "ddgst": false, 00:47:03.607 "multipath": "multipath" 00:47:03.607 } 00:47:03.607 }, 00:47:03.607 { 00:47:03.607 "method": "bdev_nvme_set_hotplug", 00:47:03.607 "params": { 00:47:03.607 "period_us": 100000, 00:47:03.607 "enable": false 00:47:03.607 } 00:47:03.607 }, 00:47:03.607 { 00:47:03.607 "method": "bdev_enable_histogram", 00:47:03.607 "params": { 00:47:03.607 "name": "nvme0n1", 00:47:03.607 "enable": true 00:47:03.607 } 00:47:03.607 }, 00:47:03.607 { 00:47:03.607 "method": "bdev_wait_for_examine" 00:47:03.607 } 00:47:03.607 ] 00:47:03.607 }, 00:47:03.607 { 00:47:03.607 "subsystem": "nbd", 00:47:03.607 "config": [] 00:47:03.607 } 00:47:03.607 ] 00:47:03.607 }' 00:47:03.607 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 675273 00:47:03.607 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 675273 ']' 00:47:03.607 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 675273 00:47:03.607 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:47:03.607 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:03.607 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 675273 00:47:03.607 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:47:03.607 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:47:03.607 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 675273' 00:47:03.607 killing process with pid 675273 00:47:03.607 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 675273 00:47:03.607 Received shutdown signal, test time was about 1.000000 seconds 00:47:03.607 00:47:03.607 Latency(us) 00:47:03.607 [2024-12-09T04:41:57.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:03.607 [2024-12-09T04:41:57.832Z] =================================================================================================================== 00:47:03.607 [2024-12-09T04:41:57.832Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:03.607 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 675273 00:47:03.865 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 675247 00:47:03.865 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 675247 ']' 
00:47:03.865 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 675247 00:47:03.865 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:47:03.865 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:03.865 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 675247 00:47:03.865 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:03.865 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:03.865 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 675247' 00:47:03.865 killing process with pid 675247 00:47:03.865 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 675247 00:47:03.865 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 675247 00:47:04.125 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:47:04.125 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:47:04.125 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:47:04.125 "subsystems": [ 00:47:04.125 { 00:47:04.125 "subsystem": "keyring", 00:47:04.125 "config": [ 00:47:04.125 { 00:47:04.125 "method": "keyring_file_add_key", 00:47:04.125 "params": { 00:47:04.125 "name": "key0", 00:47:04.125 "path": "/tmp/tmp.ZUG3jTm7MP" 00:47:04.125 } 00:47:04.125 } 00:47:04.125 ] 00:47:04.125 }, 00:47:04.125 { 00:47:04.125 "subsystem": "iobuf", 00:47:04.125 "config": [ 00:47:04.125 { 00:47:04.125 "method": "iobuf_set_options", 00:47:04.125 "params": { 00:47:04.125 "small_pool_count": 8192, 00:47:04.125 "large_pool_count": 1024, 00:47:04.125 "small_bufsize": 8192, 00:47:04.125 "large_bufsize": 135168, 00:47:04.125 "enable_numa": false 00:47:04.125 } 00:47:04.125 } 00:47:04.125 ] 00:47:04.125 }, 00:47:04.125 { 00:47:04.125 "subsystem": "sock", 00:47:04.125 "config": [ 00:47:04.125 { 00:47:04.125 "method": "sock_set_default_impl", 00:47:04.125 "params": { 00:47:04.125 "impl_name": "posix" 00:47:04.125 } 00:47:04.125 }, 00:47:04.125 { 00:47:04.125 "method": "sock_impl_set_options", 00:47:04.125 "params": { 00:47:04.125 "impl_name": "ssl", 00:47:04.125 "recv_buf_size": 4096, 00:47:04.125 "send_buf_size": 4096, 00:47:04.125 "enable_recv_pipe": true, 00:47:04.125 "enable_quickack": false, 00:47:04.125 "enable_placement_id": 0, 00:47:04.125 "enable_zerocopy_send_server": true, 00:47:04.125 "enable_zerocopy_send_client": false, 00:47:04.125 "zerocopy_threshold": 0, 00:47:04.125 "tls_version": 0, 00:47:04.125 "enable_ktls": false 00:47:04.125 } 00:47:04.125 }, 00:47:04.125 { 00:47:04.125 "method": "sock_impl_set_options", 00:47:04.125 "params": { 00:47:04.125 "impl_name": "posix", 00:47:04.125 "recv_buf_size": 2097152, 00:47:04.125 "send_buf_size": 2097152, 00:47:04.125 "enable_recv_pipe": true, 00:47:04.125 "enable_quickack": false, 00:47:04.125 "enable_placement_id": 0, 00:47:04.125 "enable_zerocopy_send_server": true, 00:47:04.125 "enable_zerocopy_send_client": false, 00:47:04.125 "zerocopy_threshold": 0, 00:47:04.125 "tls_version": 0, 00:47:04.125 "enable_ktls": false 00:47:04.125 } 00:47:04.125 } 00:47:04.125 ] 00:47:04.125 }, 00:47:04.125 { 00:47:04.125 "subsystem": "vmd", 
00:47:04.125 "config": [] 00:47:04.125 }, 00:47:04.125 { 00:47:04.125 "subsystem": "accel", 00:47:04.125 "config": [ 00:47:04.125 { 00:47:04.125 "method": "accel_set_options", 00:47:04.125 "params": { 00:47:04.125 "small_cache_size": 128, 00:47:04.125 "large_cache_size": 16, 00:47:04.125 "task_count": 2048, 00:47:04.125 "sequence_count": 2048, 00:47:04.125 "buf_count": 2048 00:47:04.125 } 00:47:04.125 } 00:47:04.125 ] 00:47:04.125 }, 00:47:04.125 { 00:47:04.125 "subsystem": "bdev", 00:47:04.125 "config": [ 00:47:04.125 { 00:47:04.125 "method": "bdev_set_options", 00:47:04.125 "params": { 00:47:04.125 "bdev_io_pool_size": 65535, 00:47:04.125 "bdev_io_cache_size": 256, 00:47:04.125 "bdev_auto_examine": true, 00:47:04.125 "iobuf_small_cache_size": 128, 00:47:04.125 "iobuf_large_cache_size": 16 00:47:04.125 } 00:47:04.125 }, 00:47:04.125 { 00:47:04.125 "method": "bdev_raid_set_options", 00:47:04.125 "params": { 00:47:04.125 "process_window_size_kb": 1024, 00:47:04.125 "process_max_bandwidth_mb_sec": 0 00:47:04.125 } 00:47:04.125 }, 00:47:04.125 { 00:47:04.125 "method": "bdev_iscsi_set_options", 00:47:04.125 "params": { 00:47:04.125 "timeout_sec": 30 00:47:04.125 } 00:47:04.125 }, 00:47:04.125 { 00:47:04.125 "method": "bdev_nvme_set_options", 00:47:04.125 "params": { 00:47:04.125 "action_on_timeout": "none", 00:47:04.125 "timeout_us": 0, 00:47:04.125 "timeout_admin_us": 0, 00:47:04.125 "keep_alive_timeout_ms": 10000, 00:47:04.125 "arbitration_burst": 0, 00:47:04.125 "low_priority_weight": 0, 00:47:04.125 "medium_priority_weight": 0, 00:47:04.125 "high_priority_weight": 0, 00:47:04.125 "nvme_adminq_poll_period_us": 10000, 00:47:04.125 "nvme_ioq_poll_period_us": 0, 00:47:04.125 "io_queue_requests": 0, 00:47:04.125 "delay_cmd_submit": true, 00:47:04.125 "transport_retry_count": 4, 00:47:04.125 "bdev_retry_count": 3, 00:47:04.125 "transport_ack_timeout": 0, 00:47:04.125 "ctrlr_loss_timeout_sec": 0, 00:47:04.125 "reconnect_delay_sec": 0, 00:47:04.125 "fast_io_fail_timeout_sec": 0, 00:47:04.125 "disable_auto_failback": false, 00:47:04.125 "generate_uuids": false, 00:47:04.125 "transport_tos": 0, 00:47:04.125 "nvme_error_stat": false, 00:47:04.125 "rdma_srq_size": 0, 00:47:04.125 "io_path_stat": false, 00:47:04.125 "allow_accel_sequence": false, 00:47:04.125 "rdma_max_cq_size": 0, 00:47:04.125 "rdma_cm_event_timeout_ms": 0, 00:47:04.125 "dhchap_digests": [ 00:47:04.125 "sha256", 00:47:04.125 "sha384", 00:47:04.125 "sha512" 00:47:04.125 ], 00:47:04.125 "dhchap_dhgroups": [ 00:47:04.125 "null", 00:47:04.125 "ffdhe2048", 00:47:04.125 "ffdhe3072", 00:47:04.125 "ffdhe4096", 00:47:04.125 "ffdhe6144", 00:47:04.125 "ffdhe8192" 00:47:04.125 ] 00:47:04.125 } 00:47:04.125 }, 00:47:04.125 { 00:47:04.125 "method": "bdev_nvme_set_hotplug", 00:47:04.125 "params": { 00:47:04.125 "period_us": 100000, 00:47:04.125 "enable": false 00:47:04.125 } 00:47:04.125 }, 00:47:04.125 { 00:47:04.125 "method": "bdev_malloc_create", 00:47:04.125 "params": { 00:47:04.125 "name": "malloc0", 00:47:04.125 "num_blocks": 8192, 00:47:04.125 "block_size": 4096, 00:47:04.125 "physical_block_size": 4096, 00:47:04.125 "uuid": "48433029-613c-49d2-a8fc-6f20950b1736", 00:47:04.125 "optimal_io_boundary": 0, 00:47:04.125 "md_size": 0, 00:47:04.125 "dif_type": 0, 00:47:04.125 "dif_is_head_of_md": false, 00:47:04.125 "dif_pi_format": 0 00:47:04.125 } 00:47:04.125 }, 00:47:04.125 { 00:47:04.125 "method": "bdev_wait_for_examine" 00:47:04.125 } 00:47:04.125 ] 00:47:04.125 }, 00:47:04.125 { 00:47:04.125 "subsystem": "nbd", 00:47:04.125 "config": [] 
00:47:04.125 }, 00:47:04.125 { 00:47:04.125 "subsystem": "scheduler", 00:47:04.125 "config": [ 00:47:04.125 { 00:47:04.125 "method": "framework_set_scheduler", 00:47:04.125 "params": { 00:47:04.125 "name": "static" 00:47:04.125 } 00:47:04.125 } 00:47:04.125 ] 00:47:04.125 }, 00:47:04.125 { 00:47:04.125 "subsystem": "nvmf", 00:47:04.125 "config": [ 00:47:04.125 { 00:47:04.125 "method": "nvmf_set_config", 00:47:04.125 "params": { 00:47:04.125 "discovery_filter": "match_any", 00:47:04.125 "admin_cmd_passthru": { 00:47:04.125 "identify_ctrlr": false 00:47:04.126 }, 00:47:04.126 "dhchap_digests": [ 00:47:04.126 "sha256", 00:47:04.126 "sha384", 00:47:04.126 "sha512" 00:47:04.126 ], 00:47:04.126 "dhchap_dhgroups": [ 00:47:04.126 "null", 00:47:04.126 "ffdhe2048", 00:47:04.126 "ffdhe3072", 00:47:04.126 "ffdhe4096", 00:47:04.126 "ffdhe6144", 00:47:04.126 "ffdhe8192" 00:47:04.126 ] 00:47:04.126 } 00:47:04.126 }, 00:47:04.126 { 00:47:04.126 "method": "nvmf_set_max_subsystems", 00:47:04.126 "params": { 00:47:04.126 "max_subsystems": 1024 00:47:04.126 } 00:47:04.126 }, 00:47:04.126 { 00:47:04.126 "method": "nvmf_set_crdt", 00:47:04.126 "params": { 00:47:04.126 "crdt1": 0, 00:47:04.126 "crdt2": 0, 00:47:04.126 "crdt3": 0 00:47:04.126 } 00:47:04.126 }, 00:47:04.126 { 00:47:04.126 "method": "nvmf_create_transport", 00:47:04.126 "params": { 00:47:04.126 "trtype": "TCP", 00:47:04.126 "max_queue_depth": 128, 00:47:04.126 "max_io_qpairs_per_ctrlr": 127, 00:47:04.126 "in_capsule_data_size": 4096, 00:47:04.126 "max_io_size": 131072, 00:47:04.126 "io_unit_size": 131072, 00:47:04.126 "max_aq_depth": 128, 00:47:04.126 "num_shared_buffers": 511, 00:47:04.126 "buf_cache_size": 4294967295, 00:47:04.126 "dif_insert_or_strip": false, 00:47:04.126 "zcopy": false, 00:47:04.126 "c2h_success": false, 00:47:04.126 "sock_priority": 0, 00:47:04.126 "abort_timeout_sec": 1, 00:47:04.126 "ack_timeout": 0, 00:47:04.126 "data_wr_pool_size": 0 00:47:04.126 } 00:47:04.126 }, 00:47:04.126 { 00:47:04.126 "method": "nvmf_create_subsystem", 00:47:04.126 "params": { 00:47:04.126 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:47:04.126 "allow_any_host": false, 00:47:04.126 "serial_number": "00000000000000000000", 00:47:04.126 "model_number": "SPDK bdev Controller", 00:47:04.126 "max_namespaces": 32, 00:47:04.126 "min_cntlid": 1, 00:47:04.126 "max_cntlid": 65519, 00:47:04.126 "ana_reporting": false 00:47:04.126 } 00:47:04.126 }, 00:47:04.126 { 00:47:04.126 "method": "nvmf_subsystem_add_host", 00:47:04.126 "params": { 00:47:04.126 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:47:04.126 "host": "nqn.2016-06.io.spdk:host1", 00:47:04.126 "psk": "key0" 00:47:04.126 } 00:47:04.126 }, 00:47:04.126 { 00:47:04.126 "method": "nvmf_subsystem_add_ns", 00:47:04.126 "params": { 00:47:04.126 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:47:04.126 "namespace": { 00:47:04.126 "nsid": 1, 00:47:04.126 "bdev_name": "malloc0", 00:47:04.126 "nguid": "48433029613C49D2A8FC6F20950B1736", 00:47:04.126 "uuid": "48433029-613c-49d2-a8fc-6f20950b1736", 00:47:04.126 "no_auto_visible": false 00:47:04.126 } 00:47:04.126 } 00:47:04.126 }, 00:47:04.126 { 00:47:04.126 "method": "nvmf_subsystem_add_listener", 00:47:04.126 "params": { 00:47:04.126 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:47:04.126 "listen_address": { 00:47:04.126 "trtype": "TCP", 00:47:04.126 "adrfam": "IPv4", 00:47:04.126 "traddr": "10.0.0.2", 00:47:04.126 "trsvcid": "4420" 00:47:04.126 }, 00:47:04.126 "secure_channel": false, 00:47:04.126 "sock_impl": "ssl" 00:47:04.126 } 00:47:04.126 } 00:47:04.126 ] 00:47:04.126 } 00:47:04.126 
] 00:47:04.126 }' 00:47:04.126 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:04.126 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:47:04.126 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=675683 00:47:04.126 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:47:04.126 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 675683 00:47:04.126 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 675683 ']' 00:47:04.126 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:04.126 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:04.126 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:04.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:04.126 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:04.126 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:47:04.126 [2024-12-09 05:41:58.319045] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:47:04.126 [2024-12-09 05:41:58.319141] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:04.385 [2024-12-09 05:41:58.391731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:04.385 [2024-12-09 05:41:58.442436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:04.385 [2024-12-09 05:41:58.442497] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:04.385 [2024-12-09 05:41:58.442521] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:04.385 [2024-12-09 05:41:58.442532] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:04.385 [2024-12-09 05:41:58.442543] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
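
Note on the pattern above: the nvmf target for the TLS test is launched with its entire JSON configuration streamed over a file descriptor (-c /dev/fd/62), so the PSK name and listener options never land in a config file on disk. Below is a minimal sketch of the same mechanism, not the exact test input: the "config" list is heavily trimmed (no namespace, no tunables), /tmp/tls_psk.txt is a placeholder for an existing NVMe TLS PSK interchange file, and the real run additionally wraps the command in "ip netns exec cvl_0_0_ns_spdk".

# Sketch only: feed nvmf_tgt an inline JSON config through process substitution,
# the same way the test script does with "-c /dev/fd/62" above.
# /tmp/tls_psk.txt is a placeholder for an existing NVMe TLS PSK interchange file.
CONFIG_JSON='{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tls_psk.txt" } }
      ]
    },
    {
      "subsystem": "nvmf",
      "config": [
        { "method": "nvmf_create_transport",
          "params": { "trtype": "TCP", "max_queue_depth": 128 } },
        { "method": "nvmf_create_subsystem",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": false } },
        { "method": "nvmf_subsystem_add_host",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
        { "method": "nvmf_subsystem_add_listener",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                          "traddr": "10.0.0.2", "trsvcid": "4420" },
                      "secure_channel": false, "sock_impl": "ssl" } }
      ]
    }
  ]
}'
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$CONFIG_JSON")
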
00:47:04.385 [2024-12-09 05:41:58.443127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:04.643 [2024-12-09 05:41:58.686268] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:04.643 [2024-12-09 05:41:58.718309] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:47:04.643 [2024-12-09 05:41:58.718582] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:05.209 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:05.209 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:47:05.209 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:47:05.209 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:05.209 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:47:05.209 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:05.209 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=675835 00:47:05.209 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 675835 /var/tmp/bdevperf.sock 00:47:05.209 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 675835 ']' 00:47:05.209 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:47:05.209 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:47:05.209 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:05.209 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:47:05.209 "subsystems": [ 00:47:05.209 { 00:47:05.209 "subsystem": "keyring", 00:47:05.209 "config": [ 00:47:05.209 { 00:47:05.209 "method": "keyring_file_add_key", 00:47:05.209 "params": { 00:47:05.209 "name": "key0", 00:47:05.209 "path": "/tmp/tmp.ZUG3jTm7MP" 00:47:05.209 } 00:47:05.209 } 00:47:05.209 ] 00:47:05.209 }, 00:47:05.209 { 00:47:05.209 "subsystem": "iobuf", 00:47:05.209 "config": [ 00:47:05.209 { 00:47:05.209 "method": "iobuf_set_options", 00:47:05.209 "params": { 00:47:05.209 "small_pool_count": 8192, 00:47:05.209 "large_pool_count": 1024, 00:47:05.209 "small_bufsize": 8192, 00:47:05.209 "large_bufsize": 135168, 00:47:05.209 "enable_numa": false 00:47:05.209 } 00:47:05.209 } 00:47:05.209 ] 00:47:05.209 }, 00:47:05.209 { 00:47:05.209 "subsystem": "sock", 00:47:05.209 "config": [ 00:47:05.209 { 00:47:05.209 "method": "sock_set_default_impl", 00:47:05.209 "params": { 00:47:05.209 "impl_name": "posix" 00:47:05.209 } 00:47:05.209 }, 00:47:05.209 { 00:47:05.209 "method": "sock_impl_set_options", 00:47:05.209 "params": { 00:47:05.209 "impl_name": "ssl", 00:47:05.209 "recv_buf_size": 4096, 00:47:05.209 "send_buf_size": 4096, 00:47:05.209 "enable_recv_pipe": true, 00:47:05.209 "enable_quickack": false, 00:47:05.209 "enable_placement_id": 0, 00:47:05.209 "enable_zerocopy_send_server": true, 00:47:05.209 "enable_zerocopy_send_client": false, 00:47:05.209 "zerocopy_threshold": 0, 00:47:05.209 "tls_version": 0, 00:47:05.209 
"enable_ktls": false 00:47:05.209 } 00:47:05.209 }, 00:47:05.209 { 00:47:05.209 "method": "sock_impl_set_options", 00:47:05.209 "params": { 00:47:05.209 "impl_name": "posix", 00:47:05.209 "recv_buf_size": 2097152, 00:47:05.209 "send_buf_size": 2097152, 00:47:05.209 "enable_recv_pipe": true, 00:47:05.209 "enable_quickack": false, 00:47:05.209 "enable_placement_id": 0, 00:47:05.209 "enable_zerocopy_send_server": true, 00:47:05.209 "enable_zerocopy_send_client": false, 00:47:05.209 "zerocopy_threshold": 0, 00:47:05.209 "tls_version": 0, 00:47:05.209 "enable_ktls": false 00:47:05.209 } 00:47:05.209 } 00:47:05.209 ] 00:47:05.210 }, 00:47:05.210 { 00:47:05.210 "subsystem": "vmd", 00:47:05.210 "config": [] 00:47:05.210 }, 00:47:05.210 { 00:47:05.210 "subsystem": "accel", 00:47:05.210 "config": [ 00:47:05.210 { 00:47:05.210 "method": "accel_set_options", 00:47:05.210 "params": { 00:47:05.210 "small_cache_size": 128, 00:47:05.210 "large_cache_size": 16, 00:47:05.210 "task_count": 2048, 00:47:05.210 "sequence_count": 2048, 00:47:05.210 "buf_count": 2048 00:47:05.210 } 00:47:05.210 } 00:47:05.210 ] 00:47:05.210 }, 00:47:05.210 { 00:47:05.210 "subsystem": "bdev", 00:47:05.210 "config": [ 00:47:05.210 { 00:47:05.210 "method": "bdev_set_options", 00:47:05.210 "params": { 00:47:05.210 "bdev_io_pool_size": 65535, 00:47:05.210 "bdev_io_cache_size": 256, 00:47:05.210 "bdev_auto_examine": true, 00:47:05.210 "iobuf_small_cache_size": 128, 00:47:05.210 "iobuf_large_cache_size": 16 00:47:05.210 } 00:47:05.210 }, 00:47:05.210 { 00:47:05.210 "method": "bdev_raid_set_options", 00:47:05.210 "params": { 00:47:05.210 "process_window_size_kb": 1024, 00:47:05.210 "process_max_bandwidth_mb_sec": 0 00:47:05.210 } 00:47:05.210 }, 00:47:05.210 { 00:47:05.210 "method": "bdev_iscsi_set_options", 00:47:05.210 "params": { 00:47:05.210 "timeout_sec": 30 00:47:05.210 } 00:47:05.210 }, 00:47:05.210 { 00:47:05.210 "method": "bdev_nvme_set_options", 00:47:05.210 "params": { 00:47:05.210 "action_on_timeout": "none", 00:47:05.210 "timeout_us": 0, 00:47:05.210 "timeout_admin_us": 0, 00:47:05.210 "keep_alive_timeout_ms": 10000, 00:47:05.210 "arbitration_burst": 0, 00:47:05.210 "low_priority_weight": 0, 00:47:05.210 "medium_priority_weight": 0, 00:47:05.210 "high_priority_weight": 0, 00:47:05.210 "nvme_adminq_poll_period_us": 10000, 00:47:05.210 "nvme_ioq_poll_period_us": 0, 00:47:05.210 "io_queue_requests": 512, 00:47:05.210 "delay_cmd_submit": true, 00:47:05.210 "transport_retry_count": 4, 00:47:05.210 "bdev_retry_count": 3, 00:47:05.210 "transport_ack_timeout": 0, 00:47:05.210 "ctrlr_loss_timeout_sec": 0, 00:47:05.210 "reconnect_delay_sec": 0, 00:47:05.210 "fast_io_fail_timeout_sec": 0, 00:47:05.210 "disable_auto_failback": false, 00:47:05.210 "generate_uuids": false, 00:47:05.210 "transport_tos": 0, 00:47:05.210 "nvme_error_stat": false, 00:47:05.210 "rdma_srq_size": 0, 00:47:05.210 "io_path_stat": false, 00:47:05.210 "allow_accel_sequence": false, 00:47:05.210 "rdma_max_cq_size": 0, 00:47:05.210 "rdma_cm_event_timeout_ms": 0, 00:47:05.210 "dhchap_digests": [ 00:47:05.210 "sha256", 00:47:05.210 "sha384", 00:47:05.210 "sha512" 00:47:05.210 ], 00:47:05.210 "dhchap_dhgroups": [ 00:47:05.210 "null", 00:47:05.210 "ffdhe2048", 00:47:05.210 "ffdhe3072", 00:47:05.210 "ffdhe4096", 00:47:05.210 "ffdhe6144", 00:47:05.210 "ffdhe8192" 00:47:05.210 ] 00:47:05.210 } 00:47:05.210 }, 00:47:05.210 { 00:47:05.210 "method": "bdev_nvme_attach_controller", 00:47:05.210 "params": { 00:47:05.210 "name": "nvme0", 00:47:05.210 "trtype": "TCP", 00:47:05.210 
"adrfam": "IPv4", 00:47:05.210 "traddr": "10.0.0.2", 00:47:05.210 "trsvcid": "4420", 00:47:05.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:47:05.210 "prchk_reftag": false, 00:47:05.210 "prchk_guard": false, 00:47:05.210 "ctrlr_loss_timeout_sec": 0, 00:47:05.210 "reconnect_delay_sec": 0, 00:47:05.210 "fast_io_fail_timeout_sec": 0, 00:47:05.210 "psk": "key0", 00:47:05.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:47:05.210 "hdgst": false, 00:47:05.210 "ddgst": false, 00:47:05.210 "multipath": "multipath" 00:47:05.210 } 00:47:05.210 }, 00:47:05.210 { 00:47:05.210 "method": "bdev_nvme_set_hotplug", 00:47:05.210 "params": { 00:47:05.210 "period_us": 100000, 00:47:05.210 "enable": false 00:47:05.210 } 00:47:05.210 }, 00:47:05.210 { 00:47:05.210 "method": "bdev_enable_histogram", 00:47:05.210 "params": { 00:47:05.210 "name": "nvme0n1", 00:47:05.210 "enable": true 00:47:05.210 } 00:47:05.210 }, 00:47:05.210 { 00:47:05.210 "method": "bdev_wait_for_examine" 00:47:05.210 } 00:47:05.210 ] 00:47:05.210 }, 00:47:05.210 { 00:47:05.210 "subsystem": "nbd", 00:47:05.210 "config": [] 00:47:05.210 } 00:47:05.210 ] 00:47:05.210 }' 00:47:05.210 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:47:05.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:47:05.210 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:05.210 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:47:05.210 [2024-12-09 05:41:59.410778] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:47:05.210 [2024-12-09 05:41:59.410868] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid675835 ] 00:47:05.468 [2024-12-09 05:41:59.477094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:05.468 [2024-12-09 05:41:59.534903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:05.726 [2024-12-09 05:41:59.721462] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:05.726 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:05.726 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:47:05.726 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:47:05.726 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:47:05.984 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:47:05.984 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:47:06.241 Running I/O for 1 seconds... 
00:47:07.172 3581.00 IOPS, 13.99 MiB/s 00:47:07.173 Latency(us) 00:47:07.173 [2024-12-09T04:42:01.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:07.173 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:47:07.173 Verification LBA range: start 0x0 length 0x2000 00:47:07.173 nvme0n1 : 1.02 3630.36 14.18 0.00 0.00 34907.91 8204.14 29903.83 00:47:07.173 [2024-12-09T04:42:01.398Z] =================================================================================================================== 00:47:07.173 [2024-12-09T04:42:01.398Z] Total : 3630.36 14.18 0.00 0.00 34907.91 8204.14 29903.83 00:47:07.173 { 00:47:07.173 "results": [ 00:47:07.173 { 00:47:07.173 "job": "nvme0n1", 00:47:07.173 "core_mask": "0x2", 00:47:07.173 "workload": "verify", 00:47:07.173 "status": "finished", 00:47:07.173 "verify_range": { 00:47:07.173 "start": 0, 00:47:07.173 "length": 8192 00:47:07.173 }, 00:47:07.173 "queue_depth": 128, 00:47:07.173 "io_size": 4096, 00:47:07.173 "runtime": 1.021662, 00:47:07.173 "iops": 3630.3591598787075, 00:47:07.173 "mibps": 14.181090468276201, 00:47:07.173 "io_failed": 0, 00:47:07.173 "io_timeout": 0, 00:47:07.173 "avg_latency_us": 34907.91469318874, 00:47:07.173 "min_latency_us": 8204.136296296296, 00:47:07.173 "max_latency_us": 29903.834074074075 00:47:07.173 } 00:47:07.173 ], 00:47:07.173 "core_count": 1 00:47:07.173 } 00:47:07.173 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:47:07.173 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:47:07.173 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:47:07.173 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:47:07.173 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:47:07.173 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:47:07.173 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:47:07.173 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:47:07.173 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:47:07.173 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:47:07.173 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:47:07.173 nvmf_trace.0 00:47:07.173 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:47:07.173 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 675835 00:47:07.173 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 675835 ']' 00:47:07.173 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 675835 00:47:07.173 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:47:07.173 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:07.173 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 675835 
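
The throughput figures in the results JSON above are internally consistent: MiB/s is IOPS times the 4096-byte I/O size, and the completed I/O count is IOPS times the runtime. A quick arithmetic cross-check, with the values copied from that results block:

# Sanity check of the reported bdevperf numbers (values copied from the JSON above).
awk 'BEGIN {
    runtime = 1.021662; iops = 3630.3591598787075; io_size = 4096
    printf "MiB/s        = %.2f (reported: 14.18)\n", iops * io_size / (1024 * 1024)
    printf "I/Os in run ~= %.0f\n", iops * runtime
}'
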
00:47:07.173 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:47:07.173 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:47:07.173 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 675835' 00:47:07.173 killing process with pid 675835 00:47:07.173 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 675835 00:47:07.173 Received shutdown signal, test time was about 1.000000 seconds 00:47:07.173 00:47:07.173 Latency(us) 00:47:07.173 [2024-12-09T04:42:01.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:07.173 [2024-12-09T04:42:01.398Z] =================================================================================================================== 00:47:07.173 [2024-12-09T04:42:01.398Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:07.173 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 675835 00:47:07.430 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:47:07.431 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:47:07.431 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:47:07.431 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:07.431 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:47:07.431 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:07.431 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:07.431 rmmod nvme_tcp 00:47:07.688 rmmod nvme_fabrics 00:47:07.688 rmmod nvme_keyring 00:47:07.688 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:07.688 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:47:07.688 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:47:07.688 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 675683 ']' 00:47:07.688 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 675683 00:47:07.688 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 675683 ']' 00:47:07.688 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 675683 00:47:07.688 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:47:07.688 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:07.688 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 675683 00:47:07.688 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:07.688 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:07.688 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 675683' 00:47:07.688 killing process with pid 675683 00:47:07.688 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 675683 00:47:07.688 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@978 -- # wait 675683 00:47:07.947 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:47:07.947 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:47:07.947 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:47:07.947 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:47:07.947 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:47:07.947 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:47:07.947 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:47:07.947 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:07.947 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:47:07.947 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:07.947 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:07.947 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:09.854 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:47:09.854 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.A7EnXvIjPL /tmp/tmp.WwiEM5U51K /tmp/tmp.ZUG3jTm7MP 00:47:09.854 00:47:09.854 real 1m23.837s 00:47:09.854 user 2m21.583s 00:47:09.854 sys 0m24.263s 00:47:09.854 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:09.854 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:47:09.854 ************************************ 00:47:09.854 END TEST nvmf_tls 00:47:09.854 ************************************ 00:47:10.113 05:42:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:47:10.113 05:42:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:47:10.113 05:42:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:10.113 05:42:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:47:10.113 ************************************ 00:47:10.113 START TEST nvmf_fips 00:47:10.113 ************************************ 00:47:10.113 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:47:10.113 * Looking for test storage... 
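
Stepping back to the TLS teardown that just completed above: it reduces to a short list of idempotent steps, roughly equivalent to the manual sketch below. The PIDs and temp key-file names are specific to this run, and deleting the namespace is an assumption about what the _remove_spdk_ns helper ultimately does.

# Rough manual equivalent of the TLS-test cleanup above (sketch only).
kill 675835 675683 2>/dev/null || true                 # bdevperf and nvmf_tgt PIDs of this run
modprobe -v -r nvme-tcp                                # unloads nvme_tcp plus fabrics/keyring deps
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged rules
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # assumption: what _remove_spdk_ns boils down to
ip -4 addr flush cvl_0_1
rm -f /tmp/tmp.A7EnXvIjPL /tmp/tmp.WwiEM5U51K /tmp/tmp.ZUG3jTm7MP   # TLS PSK temp files
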
00:47:10.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:47:10.113 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:47:10.113 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:47:10.113 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:47:10.113 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:47:10.113 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:10.113 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:10.113 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:10.113 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:47:10.113 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:47:10.113 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:47:10.113 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:47:10.113 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:47:10.113 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:47:10.113 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:47:10.113 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:10.113 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:47:10.113 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:47:10.113 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:10.113 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:47:10.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:10.114 --rc genhtml_branch_coverage=1 00:47:10.114 --rc genhtml_function_coverage=1 00:47:10.114 --rc genhtml_legend=1 00:47:10.114 --rc geninfo_all_blocks=1 00:47:10.114 --rc geninfo_unexecuted_blocks=1 00:47:10.114 00:47:10.114 ' 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:47:10.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:10.114 --rc genhtml_branch_coverage=1 00:47:10.114 --rc genhtml_function_coverage=1 00:47:10.114 --rc genhtml_legend=1 00:47:10.114 --rc geninfo_all_blocks=1 00:47:10.114 --rc geninfo_unexecuted_blocks=1 00:47:10.114 00:47:10.114 ' 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:47:10.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:10.114 --rc genhtml_branch_coverage=1 00:47:10.114 --rc genhtml_function_coverage=1 00:47:10.114 --rc genhtml_legend=1 00:47:10.114 --rc geninfo_all_blocks=1 00:47:10.114 --rc geninfo_unexecuted_blocks=1 00:47:10.114 00:47:10.114 ' 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:47:10.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:10.114 --rc genhtml_branch_coverage=1 00:47:10.114 --rc genhtml_function_coverage=1 00:47:10.114 --rc genhtml_legend=1 00:47:10.114 --rc geninfo_all_blocks=1 00:47:10.114 --rc geninfo_unexecuted_blocks=1 00:47:10.114 00:47:10.114 ' 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:10.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:47:10.114 05:42:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:47:10.114 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:47:10.115 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:47:10.374 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:47:10.374 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:47:10.374 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:47:10.374 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:47:10.374 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:47:10.374 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:47:10.374 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:47:10.374 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:47:10.374 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:47:10.374 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:47:10.374 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:47:10.374 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:47:10.374 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:47:10.374 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:47:10.374 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:47:10.374 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:10.374 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:47:10.374 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:10.374 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:47:10.374 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:10.374 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:47:10.374 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:47:10.374 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:47:10.374 Error setting digest 00:47:10.374 40524CF7127F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:47:10.374 40524CF7127F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:47:10.374 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:47:10.375 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:10.375 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:10.375 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:10.375 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:47:10.375 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:47:10.375 
05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:10.375 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:47:10.375 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:47:10.375 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:47:10.375 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:10.375 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:10.375 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:10.375 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:47:10.375 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:47:10.375 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:47:10.375 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:47:12.283 05:42:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:47:12.283 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:47:12.283 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:47:12.283 05:42:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:47:12.283 Found net devices under 0000:0a:00.0: cvl_0_0 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:47:12.283 Found net devices under 0000:0a:00.1: cvl_0_1 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:47:12.283 05:42:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:47:12.283 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:47:12.541 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:47:12.541 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:47:12.541 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:47:12.541 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:47:12.541 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:47:12.541 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:47:12.541 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:47:12.541 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:47:12.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:12.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:47:12.541 00:47:12.541 --- 10.0.0.2 ping statistics --- 00:47:12.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:12.541 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:47:12.541 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:47:12.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:47:12.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:47:12.541 00:47:12.541 --- 10.0.0.1 ping statistics --- 00:47:12.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:12.542 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:47:12.542 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:12.542 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:47:12.542 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:47:12.542 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:12.542 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:47:12.542 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:47:12.542 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:12.542 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:47:12.542 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:47:12.542 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:47:12.542 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:47:12.542 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:12.542 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:47:12.542 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=678186 00:47:12.542 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:47:12.542 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 678186 00:47:12.542 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 678186 ']' 00:47:12.542 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:12.542 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:12.542 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:12.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:12.542 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:12.542 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:47:12.542 [2024-12-09 05:42:06.700374] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
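The nvmf_tcp_init sequence traced above reduces to a short recipe: flush both e810 ports, keep the second port (cvl_0_1) in the default namespace as the initiator interface, move the first port (cvl_0_0) into a private namespace for the target, give each side a 10.0.0.0/24 address, open TCP/4420 with an SPDK-tagged iptables rule, and ping both ways. A condensed sketch of the same steps, using the interface names and addresses this run discovered:

# Flush any stale addresses, then split the two ports across namespaces.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side, default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side, private netns
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# The SPDK_NVMF comment tag is what lets cleanup strip this rule later via iptables-save | grep -v SPDK_NVMF.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Verify reachability in both directions before the target application is started.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1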
00:47:12.542 [2024-12-09 05:42:06.700444] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:12.799 [2024-12-09 05:42:06.773388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:12.799 [2024-12-09 05:42:06.831587] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:12.799 [2024-12-09 05:42:06.831663] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:12.799 [2024-12-09 05:42:06.831677] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:12.799 [2024-12-09 05:42:06.831689] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:12.799 [2024-12-09 05:42:06.831699] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:12.799 [2024-12-09 05:42:06.832325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:12.799 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:12.799 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:47:12.799 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:47:12.799 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:12.799 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:47:12.799 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:12.799 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:47:12.799 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:47:12.799 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:47:12.799 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.YHt 00:47:12.799 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:47:12.799 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.YHt 00:47:12.799 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.YHt 00:47:12.799 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.YHt 00:47:12.799 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:47:13.056 [2024-12-09 05:42:07.242247] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:13.056 [2024-12-09 05:42:07.258231] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:47:13.056 [2024-12-09 05:42:07.258540] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:13.313 malloc0 00:47:13.313 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:47:13.313 05:42:07 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=678491 00:47:13.313 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:47:13.313 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 678491 /var/tmp/bdevperf.sock 00:47:13.313 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 678491 ']' 00:47:13.313 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:47:13.313 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:13.314 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:47:13.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:47:13.314 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:13.314 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:47:13.314 [2024-12-09 05:42:07.389685] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:47:13.314 [2024-12-09 05:42:07.389774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid678491 ] 00:47:13.314 [2024-12-09 05:42:07.456870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:13.314 [2024-12-09 05:42:07.515862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:47:13.571 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:13.571 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:47:13.571 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.YHt 00:47:13.828 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:47:14.086 [2024-12-09 05:42:08.185467] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:14.086 TLSTESTn1 00:47:14.086 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:47:14.342 Running I/O for 10 seconds... 
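The initiator-side TLS setup in the trace above comes down to a temporary PSK file plus three calls against the bdevperf RPC socket. A minimal sketch of that flow, keeping the key, socket path and NQNs from this run, with rpc.py/bdevperf.py paths written relative to the SPDK checkout:

# Write the interchange-format TLS PSK to a private temp file (as fips.sh@138-140 does).
key_path=$(mktemp -t spdk-psk.XXX)
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
chmod 0600 "$key_path"
# Register the key in bdevperf's keyring, then attach the controller with TLS enabled via --psk.
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
# Kick off the queued 10-second verify workload; the per-second IOPS samples follow below.
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests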
00:47:16.207 3330.00 IOPS, 13.01 MiB/s [2024-12-09T04:42:11.810Z] 3466.50 IOPS, 13.54 MiB/s [2024-12-09T04:42:12.743Z] 3480.00 IOPS, 13.59 MiB/s [2024-12-09T04:42:13.676Z] 3477.75 IOPS, 13.58 MiB/s [2024-12-09T04:42:14.608Z] 3476.00 IOPS, 13.58 MiB/s [2024-12-09T04:42:15.540Z] 3471.83 IOPS, 13.56 MiB/s [2024-12-09T04:42:16.469Z] 3473.43 IOPS, 13.57 MiB/s [2024-12-09T04:42:17.841Z] 3478.62 IOPS, 13.59 MiB/s [2024-12-09T04:42:18.773Z] 3478.78 IOPS, 13.59 MiB/s [2024-12-09T04:42:18.773Z] 3482.10 IOPS, 13.60 MiB/s 00:47:24.548 Latency(us) 00:47:24.548 [2024-12-09T04:42:18.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:24.548 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:47:24.548 Verification LBA range: start 0x0 length 0x2000 00:47:24.548 TLSTESTn1 : 10.02 3486.87 13.62 0.00 0.00 36642.76 9514.86 35923.44 00:47:24.548 [2024-12-09T04:42:18.773Z] =================================================================================================================== 00:47:24.548 [2024-12-09T04:42:18.773Z] Total : 3486.87 13.62 0.00 0.00 36642.76 9514.86 35923.44 00:47:24.548 { 00:47:24.548 "results": [ 00:47:24.548 { 00:47:24.548 "job": "TLSTESTn1", 00:47:24.548 "core_mask": "0x4", 00:47:24.548 "workload": "verify", 00:47:24.548 "status": "finished", 00:47:24.548 "verify_range": { 00:47:24.548 "start": 0, 00:47:24.548 "length": 8192 00:47:24.548 }, 00:47:24.548 "queue_depth": 128, 00:47:24.548 "io_size": 4096, 00:47:24.548 "runtime": 10.022755, 00:47:24.548 "iops": 3486.865637242455, 00:47:24.548 "mibps": 13.620568895478339, 00:47:24.548 "io_failed": 0, 00:47:24.548 "io_timeout": 0, 00:47:24.548 "avg_latency_us": 36642.760888685414, 00:47:24.548 "min_latency_us": 9514.856296296297, 00:47:24.548 "max_latency_us": 35923.43703703704 00:47:24.548 } 00:47:24.548 ], 00:47:24.548 "core_count": 1 00:47:24.548 } 00:47:24.548 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:47:24.548 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:47:24.548 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:47:24.548 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:47:24.548 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:47:24.548 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:47:24.548 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:47:24.548 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:47:24.548 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:47:24.548 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:47:24.548 nvmf_trace.0 00:47:24.548 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:47:24.548 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 678491 00:47:24.548 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 678491 ']' 00:47:24.548 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 678491 00:47:24.548 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:47:24.548 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:24.548 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 678491 00:47:24.548 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:47:24.548 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:47:24.548 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 678491' 00:47:24.548 killing process with pid 678491 00:47:24.548 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 678491 00:47:24.548 Received shutdown signal, test time was about 10.000000 seconds 00:47:24.548 00:47:24.548 Latency(us) 00:47:24.548 [2024-12-09T04:42:18.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:24.548 [2024-12-09T04:42:18.773Z] =================================================================================================================== 00:47:24.548 [2024-12-09T04:42:18.773Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:24.548 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 678491 00:47:24.805 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:47:24.805 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:47:24.805 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:47:24.805 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:24.805 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:47:24.805 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:24.805 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:24.805 rmmod nvme_tcp 00:47:24.805 rmmod nvme_fabrics 00:47:24.805 rmmod nvme_keyring 00:47:24.805 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:24.805 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:47:24.805 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:47:24.805 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 678186 ']' 00:47:24.805 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 678186 00:47:24.805 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 678186 ']' 00:47:24.805 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 678186 00:47:24.806 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:47:24.806 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:24.806 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 678186 00:47:24.806 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:47:24.806 05:42:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:47:24.806 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 678186' 00:47:24.806 killing process with pid 678186 00:47:24.806 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 678186 00:47:24.806 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 678186 00:47:25.063 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:47:25.063 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:47:25.063 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:47:25.063 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:47:25.063 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:47:25.063 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:47:25.063 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:47:25.063 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:25.063 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:47:25.063 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:25.063 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:25.063 05:42:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.YHt 00:47:27.598 00:47:27.598 real 0m17.087s 00:47:27.598 user 0m22.807s 00:47:27.598 sys 0m5.336s 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:47:27.598 ************************************ 00:47:27.598 END TEST nvmf_fips 00:47:27.598 ************************************ 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:47:27.598 ************************************ 00:47:27.598 START TEST nvmf_control_msg_list 00:47:27.598 ************************************ 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:47:27.598 * Looking for test storage... 
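The teardown traced above is deliberately narrow: it restores iptables minus the SPDK_NVMF-tagged rules, removes the test namespace (the _remove_spdk_ns helper's own commands are suppressed in the trace, so the explicit netns delete below is an assumption), flushes the initiator-side address, and deletes the throwaway PSK file. A rough equivalent:

# Keep every firewall rule except the ones the test tagged with an SPDK_NVMF comment.
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Assumed expansion of _remove_spdk_ns: drop the target namespace created for this test.
ip netns delete cvl_0_0_ns_spdk
# Flush the initiator-side address and remove the temporary TLS PSK.
ip -4 addr flush cvl_0_1
rm -f /tmp/spdk-psk.YHt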
00:47:27.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:47:27.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:27.598 --rc genhtml_branch_coverage=1 00:47:27.598 --rc genhtml_function_coverage=1 00:47:27.598 --rc genhtml_legend=1 00:47:27.598 --rc geninfo_all_blocks=1 00:47:27.598 --rc geninfo_unexecuted_blocks=1 00:47:27.598 00:47:27.598 ' 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:47:27.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:27.598 --rc genhtml_branch_coverage=1 00:47:27.598 --rc genhtml_function_coverage=1 00:47:27.598 --rc genhtml_legend=1 00:47:27.598 --rc geninfo_all_blocks=1 00:47:27.598 --rc geninfo_unexecuted_blocks=1 00:47:27.598 00:47:27.598 ' 00:47:27.598 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:47:27.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:27.598 --rc genhtml_branch_coverage=1 00:47:27.598 --rc genhtml_function_coverage=1 00:47:27.598 --rc genhtml_legend=1 00:47:27.598 --rc geninfo_all_blocks=1 00:47:27.598 --rc geninfo_unexecuted_blocks=1 00:47:27.598 00:47:27.599 ' 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:47:27.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:27.599 --rc genhtml_branch_coverage=1 00:47:27.599 --rc genhtml_function_coverage=1 00:47:27.599 --rc genhtml_legend=1 00:47:27.599 --rc geninfo_all_blocks=1 00:47:27.599 --rc geninfo_unexecuted_blocks=1 00:47:27.599 00:47:27.599 ' 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:27.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:47:27.599 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:47:29.505 05:42:23 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:47:29.505 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:29.505 05:42:23 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:47:29.505 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:47:29.505 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:47:29.506 Found net devices under 0000:0a:00.0: cvl_0_0 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:47:29.506 Found net devices under 0000:0a:00.1: cvl_0_1 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:47:29.506 05:42:23 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:47:29.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:29.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:47:29.506 00:47:29.506 --- 10.0.0.2 ping statistics --- 00:47:29.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:29.506 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:47:29.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:29.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:47:29.506 00:47:29.506 --- 10.0.0.1 ping statistics --- 00:47:29.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:29.506 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=682106 00:47:29.506 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 682106 00:47:29.507 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:47:29.507 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 682106 ']' 00:47:29.507 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:47:29.507 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:29.507 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:29.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:29.507 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:29.507 05:42:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:47:29.765 [2024-12-09 05:42:23.764879] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:47:29.765 [2024-12-09 05:42:23.764951] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:29.765 [2024-12-09 05:42:23.835436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:29.765 [2024-12-09 05:42:23.890519] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:29.766 [2024-12-09 05:42:23.890588] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:29.766 [2024-12-09 05:42:23.890602] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:29.766 [2024-12-09 05:42:23.890612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:29.766 [2024-12-09 05:42:23.890621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
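The target configuration the control_msg_list test applies next is small: a TCP transport capped at a single control message buffer and a 768-byte in-capsule data size, one allow-any-host subsystem backed by a malloc bdev, and one listener on the namespaced target address. Sketched here with rpc_cmd expanded to the scripts/rpc.py call it wraps (that expansion and the relative paths are assumptions; the flags themselves are the ones in the trace):

# A single control message buffer is the point of the test: the three perf clients launched below must share it.
scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
# Subsystem allowing any host (-a), backed by a 32 MB malloc bdev with 512-byte blocks.
scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420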
00:47:29.766 [2024-12-09 05:42:23.891184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:47:30.024 [2024-12-09 05:42:24.033875] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:47:30.024 Malloc0 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:30.024 05:42:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:47:30.024 [2024-12-09 05:42:24.073639] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=682128 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=682129 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=682130 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 682128 00:47:30.024 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:47:30.024 [2024-12-09 05:42:24.142415] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:47:30.024 [2024-12-09 05:42:24.142720] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:47:30.024 [2024-12-09 05:42:24.152131] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:47:30.978 Initializing NVMe Controllers 00:47:30.978 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:47:30.978 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:47:30.978 Initialization complete. Launching workers. 
00:47:30.978 ======================================================== 00:47:30.978 Latency(us) 00:47:30.978 Device Information : IOPS MiB/s Average min max 00:47:30.978 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40904.68 40836.32 41076.32 00:47:30.978 ======================================================== 00:47:30.978 Total : 25.00 0.10 40904.68 40836.32 41076.32 00:47:30.978 00:47:31.235 Initializing NVMe Controllers 00:47:31.235 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:47:31.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:47:31.235 Initialization complete. Launching workers. 00:47:31.235 ======================================================== 00:47:31.235 Latency(us) 00:47:31.235 Device Information : IOPS MiB/s Average min max 00:47:31.235 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 5777.99 22.57 172.68 156.98 296.23 00:47:31.235 ======================================================== 00:47:31.235 Total : 5777.99 22.57 172.68 156.98 296.23 00:47:31.235 00:47:31.235 Initializing NVMe Controllers 00:47:31.235 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:47:31.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:47:31.235 Initialization complete. Launching workers. 00:47:31.235 ======================================================== 00:47:31.235 Latency(us) 00:47:31.235 Device Information : IOPS MiB/s Average min max 00:47:31.235 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40871.45 40189.93 40953.49 00:47:31.235 ======================================================== 00:47:31.235 Total : 25.00 0.10 40871.45 40189.93 40953.49 00:47:31.235 00:47:31.235 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 682129 00:47:31.235 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 682130 00:47:31.235 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:47:31.235 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:47:31.235 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:47:31.235 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:47:31.235 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:31.235 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:47:31.235 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:31.235 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:31.235 rmmod nvme_tcp 00:47:31.235 rmmod nvme_fabrics 00:47:31.235 rmmod nvme_keyring 00:47:31.235 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:31.235 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:47:31.235 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:47:31.235 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 
-- # '[' -n 682106 ']' 00:47:31.235 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 682106 00:47:31.235 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 682106 ']' 00:47:31.235 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 682106 00:47:31.235 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:47:31.235 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:31.235 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 682106 00:47:31.235 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:31.235 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:31.235 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 682106' 00:47:31.235 killing process with pid 682106 00:47:31.235 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 682106 00:47:31.235 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 682106 00:47:31.494 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:47:31.494 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:47:31.494 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:47:31.494 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:47:31.494 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:47:31.494 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:47:31.494 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:47:31.494 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:31.494 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:47:31.494 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:31.494 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:31.494 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:34.025 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:47:34.025 00:47:34.025 real 0m6.437s 00:47:34.025 user 0m5.735s 00:47:34.025 sys 0m2.573s 00:47:34.025 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:34.025 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:47:34.025 ************************************ 00:47:34.025 END TEST nvmf_control_msg_list 00:47:34.025 ************************************ 
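For reference, the nvmf_control_msg_list run above reduces to the RPC and perf sequence below. This is a minimal sketch reconstructed from the xtrace, not the test script itself: the relative rpc.py/spdk_nvme_perf paths, the default /var/tmp/spdk.sock RPC socket, and an already-running nvmf_tgt reachable at 10.0.0.2 are assumptions (in this job the target runs inside the cvl_0_0_ns_spdk network namespace).

  # Minimal sketch, assuming an SPDK checkout as the working directory and a
  # running nvmf_tgt on the default RPC socket. Flags are copied from the trace.
  RPC=./scripts/rpc.py                      # assumed relative path
  PERF=./build/bin/spdk_nvme_perf           # assumed relative path
  NQN=nqn.2024-07.io.spdk:cnode0

  # Constrain the TCP transport to a single control message, as in the trace above.
  $RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  $RPC nvmf_create_subsystem "$NQN" -a
  $RPC bdev_malloc_create -b Malloc0 32 512
  $RPC nvmf_subsystem_add_ns "$NQN" Malloc0
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

  # Three concurrent queue-depth-1 randread clients on separate core masks,
  # matching perf_pid1..perf_pid3 above.
  for mask in 0x2 0x4 0x8; do
    "$PERF" -c "$mask" -q 1 -o 4096 -w randread -t 1 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  done
  wait

Driving several single-queue-depth initiators against a transport limited to one control message buffer presumably forces requests onto the control message list, which is what the three per-core latency tables above are measuring.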
00:47:34.025 05:42:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:47:34.025 05:42:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:47:34.025 05:42:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:34.025 05:42:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:47:34.025 ************************************ 00:47:34.025 START TEST nvmf_wait_for_buf 00:47:34.025 ************************************ 00:47:34.025 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:47:34.025 * Looking for test storage... 00:47:34.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:47:34.025 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:47:34.025 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:47:34.025 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:47:34.025 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:47:34.025 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:34.025 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:34.025 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:34.025 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:47:34.025 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:47:34.025 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:47:34.025 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:47:34.025 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:47:34.025 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:47:34.025 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:47:34.025 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:34.025 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:47:34.025 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:47:34.025 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:47:34.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:34.026 --rc genhtml_branch_coverage=1 00:47:34.026 --rc genhtml_function_coverage=1 00:47:34.026 --rc genhtml_legend=1 00:47:34.026 --rc geninfo_all_blocks=1 00:47:34.026 --rc geninfo_unexecuted_blocks=1 00:47:34.026 00:47:34.026 ' 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:47:34.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:34.026 --rc genhtml_branch_coverage=1 00:47:34.026 --rc genhtml_function_coverage=1 00:47:34.026 --rc genhtml_legend=1 00:47:34.026 --rc geninfo_all_blocks=1 00:47:34.026 --rc geninfo_unexecuted_blocks=1 00:47:34.026 00:47:34.026 ' 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:47:34.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:34.026 --rc genhtml_branch_coverage=1 00:47:34.026 --rc genhtml_function_coverage=1 00:47:34.026 --rc genhtml_legend=1 00:47:34.026 --rc geninfo_all_blocks=1 00:47:34.026 --rc geninfo_unexecuted_blocks=1 00:47:34.026 00:47:34.026 ' 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:47:34.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:34.026 --rc genhtml_branch_coverage=1 00:47:34.026 --rc genhtml_function_coverage=1 00:47:34.026 --rc genhtml_legend=1 00:47:34.026 --rc geninfo_all_blocks=1 00:47:34.026 --rc geninfo_unexecuted_blocks=1 00:47:34.026 00:47:34.026 ' 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:34.026 05:42:27 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:34.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:47:34.026 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:47:35.922 
05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:47:35.922 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:47:35.922 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:47:35.922 Found net devices under 0000:0a:00.0: cvl_0_0 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:47:35.922 Found net devices under 0000:0a:00.1: cvl_0_1 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:35.922 05:42:30 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:47:35.922 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:47:36.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:36.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:47:36.180 00:47:36.180 --- 10.0.0.2 ping statistics --- 00:47:36.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:36.180 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:47:36.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:47:36.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:47:36.180 00:47:36.180 --- 10.0.0.1 ping statistics --- 00:47:36.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:36.180 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=684329 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 684329 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 684329 ']' 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:36.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:36.180 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:47:36.180 [2024-12-09 05:42:30.309259] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:47:36.180 [2024-12-09 05:42:30.309363] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:36.180 [2024-12-09 05:42:30.403815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:36.438 [2024-12-09 05:42:30.475356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:36.438 [2024-12-09 05:42:30.475421] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:36.438 [2024-12-09 05:42:30.475453] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:36.438 [2024-12-09 05:42:30.475476] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:36.438 [2024-12-09 05:42:30.475508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:36.438 [2024-12-09 05:42:30.476325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:36.438 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:36.438 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:47:36.438 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:47:36.438 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:36.438 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:47:36.696 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:36.696 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:47:36.696 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:47:36.696 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:47:36.696 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:36.696 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:47:36.696 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:36.696 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:47:36.696 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:36.696 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:47:36.697 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:36.697 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:47:36.697 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:36.697 05:42:30 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:47:36.697 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:36.697 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:47:36.697 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:36.697 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:47:36.697 Malloc0 00:47:36.697 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:36.697 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:47:36.697 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:36.697 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:47:36.697 [2024-12-09 05:42:30.788766] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:36.697 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:36.697 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:47:36.697 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:36.697 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:47:36.697 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:36.697 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:47:36.697 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:36.697 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:47:36.697 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:36.697 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:47:36.697 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:36.697 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:47:36.697 [2024-12-09 05:42:30.812983] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:36.697 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:36.697 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:47:36.697 [2024-12-09 05:42:30.902382] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:47:38.067 Initializing NVMe Controllers 00:47:38.067 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:47:38.067 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:47:38.067 Initialization complete. Launching workers. 00:47:38.067 ======================================================== 00:47:38.067 Latency(us) 00:47:38.067 Device Information : IOPS MiB/s Average min max 00:47:38.067 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 123.57 15.45 33529.77 29968.52 70797.24 00:47:38.067 ======================================================== 00:47:38.067 Total : 123.57 15.45 33529.77 29968.52 70797.24 00:47:38.067 00:47:38.325 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:47:38.325 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:47:38.325 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:38.325 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:47:38.325 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:38.325 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:47:38.325 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:47:38.325 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:47:38.325 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:47:38.325 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:47:38.325 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:47:38.325 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:38.325 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:47:38.325 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:38.325 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:38.325 rmmod nvme_tcp 00:47:38.325 rmmod nvme_fabrics 00:47:38.325 rmmod nvme_keyring 00:47:38.326 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:38.326 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:47:38.326 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:47:38.326 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 684329 ']' 00:47:38.326 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 684329 00:47:38.326 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 684329 ']' 00:47:38.326 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 684329 00:47:38.326 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:47:38.326 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:38.326 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 684329 00:47:38.326 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:38.326 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:38.326 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 684329' 00:47:38.326 killing process with pid 684329 00:47:38.326 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 684329 00:47:38.326 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 684329 00:47:38.586 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:47:38.586 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:47:38.586 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:47:38.586 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:47:38.586 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:47:38.586 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:47:38.586 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:47:38.586 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:38.586 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:47:38.586 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:38.586 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:38.586 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:40.653 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:47:40.653 00:47:40.653 real 0m7.087s 00:47:40.653 user 0m3.486s 00:47:40.653 sys 0m2.159s 00:47:40.653 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:40.653 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:47:40.653 ************************************ 00:47:40.653 END TEST nvmf_wait_for_buf 00:47:40.653 ************************************ 00:47:40.653 05:42:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:47:40.653 05:42:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:47:40.653 05:42:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:47:40.653 05:42:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:47:40.654 05:42:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:47:40.654 05:42:34 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:47:43.186 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:47:43.186 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:47:43.186 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:47:43.186 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:47:43.186 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:47:43.186 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:47:43.186 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:47:43.186 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:47:43.186 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:47:43.186 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:47:43.186 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:47:43.186 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:47:43.186 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:47:43.186 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:47:43.186 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:47:43.187 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:47:43.187 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:47:43.187 Found net devices under 0000:0a:00.0: cvl_0_0 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # 
echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:47:43.187 Found net devices under 0000:0a:00.1: cvl_0_1 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:47:43.187 ************************************ 00:47:43.187 START TEST nvmf_perf_adq 00:47:43.187 ************************************ 00:47:43.187 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:47:43.187 * Looking for test storage... 00:47:43.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:47:43.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:43.187 --rc genhtml_branch_coverage=1 00:47:43.187 --rc genhtml_function_coverage=1 00:47:43.187 --rc genhtml_legend=1 00:47:43.187 --rc geninfo_all_blocks=1 00:47:43.187 --rc geninfo_unexecuted_blocks=1 00:47:43.187 00:47:43.187 ' 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:47:43.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:43.187 --rc genhtml_branch_coverage=1 00:47:43.187 --rc genhtml_function_coverage=1 00:47:43.187 --rc genhtml_legend=1 00:47:43.187 --rc geninfo_all_blocks=1 00:47:43.187 --rc geninfo_unexecuted_blocks=1 00:47:43.187 00:47:43.187 ' 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:47:43.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:43.187 --rc genhtml_branch_coverage=1 00:47:43.187 --rc genhtml_function_coverage=1 00:47:43.187 --rc genhtml_legend=1 00:47:43.187 --rc geninfo_all_blocks=1 00:47:43.187 --rc geninfo_unexecuted_blocks=1 00:47:43.187 00:47:43.187 ' 00:47:43.187 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:47:43.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:43.188 --rc genhtml_branch_coverage=1 00:47:43.188 --rc genhtml_function_coverage=1 00:47:43.188 --rc genhtml_legend=1 00:47:43.188 --rc geninfo_all_blocks=1 00:47:43.188 --rc geninfo_unexecuted_blocks=1 00:47:43.188 00:47:43.188 ' 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:43.188 05:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:43.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:47:43.188 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:47:43.188 05:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:47:45.092 05:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:47:45.092 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:47:45.092 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:47:45.092 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:47:45.093 Found net devices under 0000:0a:00.0: cvl_0_0 00:47:45.093 05:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:47:45.093 Found net devices under 0000:0a:00.1: cvl_0_1 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:47:45.093 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:47:45.657 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:47:48.180 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:47:53.455 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:47:53.455 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:47:53.455 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:0a:00.0: cvl_0_0' 00:47:53.456 Found net devices under 0000:0a:00.0: cvl_0_0 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:47:53.456 Found net devices under 0000:0a:00.1: cvl_0_1 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:47:53.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:53.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:47:53.456 00:47:53.456 --- 10.0.0.2 ping statistics --- 00:47:53.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:53.456 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:47:53.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:47:53.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:47:53.456 00:47:53.456 --- 10.0.0.1 ping statistics --- 00:47:53.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:53.456 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=689174 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 689174 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 689174 ']' 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:53.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:47:53.456 [2024-12-09 05:42:47.360342] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
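Note on the setup that was just traced: with both E810 ports in the same host, nvmf_tcp_init fakes a two-node topology by flushing the addresses on cvl_0_0 and cvl_0_1, moving the target port cvl_0_0 into a private network namespace cvl_0_0_ns_spdk with 10.0.0.2/24, keeping the initiator port cvl_0_1 in the root namespace with 10.0.0.1/24, opening TCP port 4420 in iptables, and ping-checking both directions before nvmf_tgt is started inside the namespace with -m 0xF --wait-for-rpc (the "Starting SPDK ..." banner above). A condensed sketch of that plumbing, reconstructed from the commands in the trace, follows; the relative paths and the RPC polling loop at the end are illustrative stand-ins for the script's waitforlisten helper, and the iptables comment tag is omitted.

# Sketch: rebuild the namespaced target/initiator topology used by the test.
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip -4 addr flush dev "$TGT_IF"
ip -4 addr flush dev "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                           # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                       # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target side
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
ping -c 1 10.0.0.2                                   # root ns -> namespaced target port
ip netns exec "$NS" ping -c 1 10.0.0.1               # namespace -> root ns
# Launch the target inside the namespace and wait until its RPC socket answers.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

Pinning the target's port in its own namespace means the perf run later in the log can actually cross the physical link between the two E810 ports instead of being short-circuited through the local loopback path.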
00:47:53.456 [2024-12-09 05:42:47.360426] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:53.456 [2024-12-09 05:42:47.432977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:53.456 [2024-12-09 05:42:47.489647] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:53.456 [2024-12-09 05:42:47.489704] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:53.456 [2024-12-09 05:42:47.489743] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:53.456 [2024-12-09 05:42:47.489755] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:53.456 [2024-12-09 05:42:47.489763] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:53.456 [2024-12-09 05:42:47.491328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:53.456 [2024-12-09 05:42:47.491358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:47:53.456 [2024-12-09 05:42:47.491422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:47:53.456 [2024-12-09 05:42:47.491426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:53.456 
05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:53.456 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:47:53.715 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:53.715 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:47:53.715 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:53.715 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:47:53.715 [2024-12-09 05:42:47.762834] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:53.715 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:53.715 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:47:53.715 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:53.715 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:47:53.715 Malloc1 00:47:53.715 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:53.715 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:47:53.715 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:53.715 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:47:53.715 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:53.715 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:47:53.715 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:53.715 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:47:53.715 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:53.715 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:47:53.715 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:53.715 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:47:53.715 [2024-12-09 05:42:47.822622] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:53.715 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:53.715 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=689209 00:47:53.715 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:47:53.715 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:47:55.615 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:47:55.615 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:55.615 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:47:55.872 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:55.872 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:47:55.872 "tick_rate": 2700000000, 00:47:55.872 "poll_groups": [ 00:47:55.872 { 00:47:55.872 "name": "nvmf_tgt_poll_group_000", 00:47:55.872 "admin_qpairs": 1, 00:47:55.872 "io_qpairs": 1, 00:47:55.872 "current_admin_qpairs": 1, 00:47:55.872 "current_io_qpairs": 1, 00:47:55.872 "pending_bdev_io": 0, 00:47:55.872 "completed_nvme_io": 19998, 00:47:55.872 "transports": [ 00:47:55.872 { 00:47:55.872 "trtype": "TCP" 00:47:55.872 } 00:47:55.872 ] 00:47:55.872 }, 00:47:55.872 { 00:47:55.872 "name": "nvmf_tgt_poll_group_001", 00:47:55.872 "admin_qpairs": 0, 00:47:55.872 "io_qpairs": 1, 00:47:55.872 "current_admin_qpairs": 0, 00:47:55.872 "current_io_qpairs": 1, 00:47:55.872 "pending_bdev_io": 0, 00:47:55.872 "completed_nvme_io": 18732, 00:47:55.872 "transports": [ 00:47:55.872 { 00:47:55.872 "trtype": "TCP" 00:47:55.872 } 00:47:55.872 ] 00:47:55.872 }, 00:47:55.872 { 00:47:55.872 "name": "nvmf_tgt_poll_group_002", 00:47:55.872 "admin_qpairs": 0, 00:47:55.872 "io_qpairs": 1, 00:47:55.872 "current_admin_qpairs": 0, 00:47:55.872 "current_io_qpairs": 1, 00:47:55.872 "pending_bdev_io": 0, 00:47:55.872 "completed_nvme_io": 19914, 00:47:55.872 "transports": [ 00:47:55.872 { 00:47:55.872 "trtype": "TCP" 00:47:55.872 } 00:47:55.872 ] 00:47:55.872 }, 00:47:55.872 { 00:47:55.872 "name": "nvmf_tgt_poll_group_003", 00:47:55.872 "admin_qpairs": 0, 00:47:55.872 "io_qpairs": 1, 00:47:55.872 "current_admin_qpairs": 0, 00:47:55.872 "current_io_qpairs": 1, 00:47:55.872 "pending_bdev_io": 0, 00:47:55.872 "completed_nvme_io": 19512, 00:47:55.872 "transports": [ 00:47:55.872 { 00:47:55.872 "trtype": "TCP" 00:47:55.872 } 00:47:55.872 ] 00:47:55.872 } 00:47:55.872 ] 00:47:55.872 }' 00:47:55.872 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:47:55.872 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:47:55.872 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:47:55.872 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:47:55.872 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 689209 00:48:03.972 Initializing NVMe Controllers 00:48:03.972 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:48:03.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:48:03.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:48:03.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:48:03.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:48:03.972 Initialization complete. Launching workers. 00:48:03.972 ======================================================== 00:48:03.972 Latency(us) 00:48:03.972 Device Information : IOPS MiB/s Average min max 00:48:03.972 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10280.90 40.16 6225.84 2471.28 10346.08 00:48:03.972 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9917.10 38.74 6454.86 2557.12 10133.81 00:48:03.972 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10448.80 40.82 6124.79 2659.28 10081.14 00:48:03.972 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10485.60 40.96 6103.31 2358.45 10252.25 00:48:03.972 ======================================================== 00:48:03.972 Total : 41132.39 160.67 6224.15 2358.45 10346.08 00:48:03.972 00:48:03.972 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:48:03.972 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:48:03.972 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:48:03.972 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:48:03.972 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:48:03.972 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:48:03.972 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:48:03.972 rmmod nvme_tcp 00:48:03.972 rmmod nvme_fabrics 00:48:03.972 rmmod nvme_keyring 00:48:03.972 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:48:03.972 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:48:03.972 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:48:03.972 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 689174 ']' 00:48:03.972 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 689174 00:48:03.972 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 689174 ']' 00:48:03.973 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 689174 00:48:03.973 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:48:03.973 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:03.973 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 689174 00:48:03.973 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:48:03.973 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:48:03.973 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 689174' 00:48:03.973 killing process with pid 689174 00:48:03.973 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 689174 00:48:03.973 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 689174 00:48:04.231 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:48:04.231 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:48:04.231 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:48:04.231 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:48:04.231 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:48:04.231 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:48:04.231 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:48:04.231 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:48:04.231 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:48:04.231 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:04.231 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:04.231 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:06.767 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:48:06.767 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:48:06.767 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:48:06.767 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:48:07.025 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:48:09.546 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:48:14.814 05:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:48:14.814 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:48:14.814 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:48:14.814 Found net devices under 0000:0a:00.0: cvl_0_0 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:48:14.814 05:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:48:14.814 Found net devices under 0000:0a:00.1: cvl_0_1 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:48:14.814 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:48:14.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:48:14.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:48:14.815 00:48:14.815 --- 10.0.0.2 ping statistics --- 00:48:14.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:14.815 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:48:14.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:48:14.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:48:14.815 00:48:14.815 --- 10.0.0.1 ping statistics --- 00:48:14.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:14.815 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:48:14.815 net.core.busy_poll = 1 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:48:14.815 net.core.busy_read = 1 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=691948 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 691948 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 691948 ']' 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:14.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:14.815 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:48:14.815 [2024-12-09 05:43:08.917731] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:48:14.815 [2024-12-09 05:43:08.917821] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:14.815 [2024-12-09 05:43:08.989969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:48:15.072 [2024-12-09 05:43:09.048527] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:48:15.072 [2024-12-09 05:43:09.048576] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:48:15.072 [2024-12-09 05:43:09.048599] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:48:15.072 [2024-12-09 05:43:09.048611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:48:15.072 [2024-12-09 05:43:09.048621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:48:15.072 [2024-12-09 05:43:09.050116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:15.072 [2024-12-09 05:43:09.050183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:48:15.072 [2024-12-09 05:43:09.050248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:48:15.072 [2024-12-09 05:43:09.050251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:15.072 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:15.072 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:48:15.072 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:48:15.072 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:15.072 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:48:15.072 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:15.072 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:48:15.072 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:48:15.072 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:48:15.072 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:15.072 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:48:15.072 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:15.072 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:48:15.072 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:48:15.072 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:15.072 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:48:15.072 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:15.072 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:48:15.072 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:15.072 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:48:15.330 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:15.330 05:43:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:48:15.330 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:15.330 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:48:15.330 [2024-12-09 05:43:09.307017] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:15.330 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:15.330 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:48:15.330 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:15.330 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:48:15.330 Malloc1 00:48:15.330 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:15.330 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:48:15.330 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:15.330 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:48:15.330 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:15.330 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:48:15.330 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:15.330 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:48:15.330 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:15.330 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:48:15.330 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:15.330 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:48:15.330 [2024-12-09 05:43:09.367626] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:15.330 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:15.330 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=691988 00:48:15.330 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:48:15.330 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:48:17.223 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:48:17.223 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:17.223 05:43:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:48:17.223 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:17.223 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:48:17.223 "tick_rate": 2700000000, 00:48:17.223 "poll_groups": [ 00:48:17.223 { 00:48:17.223 "name": "nvmf_tgt_poll_group_000", 00:48:17.223 "admin_qpairs": 1, 00:48:17.223 "io_qpairs": 2, 00:48:17.223 "current_admin_qpairs": 1, 00:48:17.223 "current_io_qpairs": 2, 00:48:17.223 "pending_bdev_io": 0, 00:48:17.223 "completed_nvme_io": 24901, 00:48:17.223 "transports": [ 00:48:17.223 { 00:48:17.223 "trtype": "TCP" 00:48:17.223 } 00:48:17.223 ] 00:48:17.223 }, 00:48:17.223 { 00:48:17.223 "name": "nvmf_tgt_poll_group_001", 00:48:17.223 "admin_qpairs": 0, 00:48:17.223 "io_qpairs": 2, 00:48:17.223 "current_admin_qpairs": 0, 00:48:17.223 "current_io_qpairs": 2, 00:48:17.223 "pending_bdev_io": 0, 00:48:17.223 "completed_nvme_io": 25051, 00:48:17.223 "transports": [ 00:48:17.223 { 00:48:17.223 "trtype": "TCP" 00:48:17.223 } 00:48:17.223 ] 00:48:17.223 }, 00:48:17.223 { 00:48:17.223 "name": "nvmf_tgt_poll_group_002", 00:48:17.223 "admin_qpairs": 0, 00:48:17.223 "io_qpairs": 0, 00:48:17.223 "current_admin_qpairs": 0, 00:48:17.223 "current_io_qpairs": 0, 00:48:17.223 "pending_bdev_io": 0, 00:48:17.223 "completed_nvme_io": 0, 00:48:17.223 "transports": [ 00:48:17.223 { 00:48:17.224 "trtype": "TCP" 00:48:17.224 } 00:48:17.224 ] 00:48:17.224 }, 00:48:17.224 { 00:48:17.224 "name": "nvmf_tgt_poll_group_003", 00:48:17.224 "admin_qpairs": 0, 00:48:17.224 "io_qpairs": 0, 00:48:17.224 "current_admin_qpairs": 0, 00:48:17.224 "current_io_qpairs": 0, 00:48:17.224 "pending_bdev_io": 0, 00:48:17.224 "completed_nvme_io": 0, 00:48:17.224 "transports": [ 00:48:17.224 { 00:48:17.224 "trtype": "TCP" 00:48:17.224 } 00:48:17.224 ] 00:48:17.224 } 00:48:17.224 ] 00:48:17.224 }' 00:48:17.224 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:48:17.224 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:48:17.224 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:48:17.224 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:48:17.224 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 691988 00:48:27.184 Initializing NVMe Controllers 00:48:27.184 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:48:27.184 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:48:27.184 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:48:27.184 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:48:27.184 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:48:27.184 Initialization complete. Launching workers. 
00:48:27.184 ======================================================== 00:48:27.185 Latency(us) 00:48:27.185 Device Information : IOPS MiB/s Average min max 00:48:27.185 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7977.90 31.16 8023.94 1904.70 55212.25 00:48:27.185 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7350.10 28.71 8709.75 1218.92 53860.12 00:48:27.185 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5722.90 22.36 11188.25 1850.94 54987.34 00:48:27.185 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5465.80 21.35 11742.89 1921.96 54621.66 00:48:27.185 ======================================================== 00:48:27.185 Total : 26516.70 103.58 9663.54 1218.92 55212.25 00:48:27.185 00:48:27.185 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:48:27.185 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:48:27.185 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:48:27.185 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:48:27.185 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:48:27.185 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:48:27.185 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:48:27.185 rmmod nvme_tcp 00:48:27.185 rmmod nvme_fabrics 00:48:27.185 rmmod nvme_keyring 00:48:27.185 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:48:27.185 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:48:27.185 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:48:27.185 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 691948 ']' 00:48:27.185 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 691948 00:48:27.185 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 691948 ']' 00:48:27.185 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 691948 00:48:27.185 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:48:27.185 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:27.185 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 691948 00:48:27.185 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:48:27.185 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:48:27.185 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 691948' 00:48:27.185 killing process with pid 691948 00:48:27.185 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 691948 00:48:27.185 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 691948 00:48:27.185 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:48:27.185 05:43:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:48:27.185 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:48:27.185 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:48:27.185 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:48:27.185 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:48:27.185 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:48:27.185 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:48:27.185 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:48:27.185 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:27.185 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:27.185 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:48:29.085 00:48:29.085 real 0m46.109s 00:48:29.085 user 2m40.313s 00:48:29.085 sys 0m9.503s 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:48:29.085 ************************************ 00:48:29.085 END TEST nvmf_perf_adq 00:48:29.085 ************************************ 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:48:29.085 ************************************ 00:48:29.085 START TEST nvmf_shutdown 00:48:29.085 ************************************ 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:48:29.085 * Looking for test storage... 
00:48:29.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:48:29.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:29.085 --rc genhtml_branch_coverage=1 00:48:29.085 --rc genhtml_function_coverage=1 00:48:29.085 --rc genhtml_legend=1 00:48:29.085 --rc geninfo_all_blocks=1 00:48:29.085 --rc geninfo_unexecuted_blocks=1 00:48:29.085 00:48:29.085 ' 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:48:29.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:29.085 --rc genhtml_branch_coverage=1 00:48:29.085 --rc genhtml_function_coverage=1 00:48:29.085 --rc genhtml_legend=1 00:48:29.085 --rc geninfo_all_blocks=1 00:48:29.085 --rc geninfo_unexecuted_blocks=1 00:48:29.085 00:48:29.085 ' 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:48:29.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:29.085 --rc genhtml_branch_coverage=1 00:48:29.085 --rc genhtml_function_coverage=1 00:48:29.085 --rc genhtml_legend=1 00:48:29.085 --rc geninfo_all_blocks=1 00:48:29.085 --rc geninfo_unexecuted_blocks=1 00:48:29.085 00:48:29.085 ' 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:48:29.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:29.085 --rc genhtml_branch_coverage=1 00:48:29.085 --rc genhtml_function_coverage=1 00:48:29.085 --rc genhtml_legend=1 00:48:29.085 --rc geninfo_all_blocks=1 00:48:29.085 --rc geninfo_unexecuted_blocks=1 00:48:29.085 00:48:29.085 ' 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:48:29.085 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:48:29.085 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:48:29.086 05:43:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:48:29.086 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:48:29.086 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:48:29.086 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:48:29.343 ************************************ 00:48:29.343 START TEST nvmf_shutdown_tc1 00:48:29.343 ************************************ 00:48:29.343 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:48:29.343 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:48:29.343 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:48:29.343 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:48:29.343 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:48:29.343 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:48:29.343 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:48:29.343 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:48:29.343 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:29.343 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:29.343 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:29.343 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:48:29.343 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:48:29.343 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:48:29.343 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:48:31.870 05:43:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:48:31.870 05:43:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:48:31.870 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:48:31.870 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:48:31.871 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:48:31.871 Found net devices under 0000:0a:00.0: cvl_0_0 00:48:31.871 05:43:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:48:31.871 Found net devices under 0000:0a:00.1: cvl_0_1 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:48:31.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:48:31.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:48:31.871 00:48:31.871 --- 10.0.0.2 ping statistics --- 00:48:31.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:31.871 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:48:31.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:48:31.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:48:31.871 00:48:31.871 --- 10.0.0.1 ping statistics --- 00:48:31.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:31.871 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=695363 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 695363 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 695363 ']' 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:31.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:31.871 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:48:31.871 [2024-12-09 05:43:25.795443] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:48:31.872 [2024-12-09 05:43:25.795526] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:31.872 [2024-12-09 05:43:25.865422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:48:31.872 [2024-12-09 05:43:25.921501] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:48:31.872 [2024-12-09 05:43:25.921562] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:48:31.872 [2024-12-09 05:43:25.921584] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:48:31.872 [2024-12-09 05:43:25.921595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:48:31.872 [2024-12-09 05:43:25.921604] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:48:31.872 [2024-12-09 05:43:25.923217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:48:31.872 [2024-12-09 05:43:25.923333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:48:31.872 [2024-12-09 05:43:25.923418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:48:31.872 [2024-12-09 05:43:25.923422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:48:31.872 [2024-12-09 05:43:26.062503] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:48:31.872 05:43:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:31.872 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:48:32.131 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:32.131 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:48:32.131 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:32.131 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:48:32.131 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:48:32.131 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:32.131 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:48:32.131 Malloc1 
00:48:32.131 [2024-12-09 05:43:26.155282] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:32.131 Malloc2 00:48:32.131 Malloc3 00:48:32.131 Malloc4 00:48:32.131 Malloc5 00:48:32.387 Malloc6 00:48:32.387 Malloc7 00:48:32.387 Malloc8 00:48:32.387 Malloc9 00:48:32.387 Malloc10 00:48:32.387 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:32.387 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:48:32.387 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:32.387 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=695468 00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 695468 /var/tmp/bdevperf.sock 00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 695468 ']' 00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:48:32.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:32.645 { 00:48:32.645 "params": { 00:48:32.645 "name": "Nvme$subsystem", 00:48:32.645 "trtype": "$TEST_TRANSPORT", 00:48:32.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:32.645 "adrfam": "ipv4", 00:48:32.645 "trsvcid": "$NVMF_PORT", 00:48:32.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:32.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:32.645 "hdgst": ${hdgst:-false}, 00:48:32.645 "ddgst": ${ddgst:-false} 00:48:32.645 }, 00:48:32.645 "method": "bdev_nvme_attach_controller" 00:48:32.645 } 00:48:32.645 EOF 00:48:32.645 )") 00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:32.645 { 00:48:32.645 "params": { 00:48:32.645 "name": "Nvme$subsystem", 00:48:32.645 "trtype": "$TEST_TRANSPORT", 00:48:32.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:32.645 "adrfam": "ipv4", 00:48:32.645 "trsvcid": "$NVMF_PORT", 00:48:32.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:32.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:32.645 "hdgst": ${hdgst:-false}, 00:48:32.645 "ddgst": ${ddgst:-false} 00:48:32.645 }, 00:48:32.645 "method": "bdev_nvme_attach_controller" 00:48:32.645 } 00:48:32.645 EOF 00:48:32.645 )") 00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:32.645 { 00:48:32.645 "params": { 00:48:32.645 "name": "Nvme$subsystem", 00:48:32.645 "trtype": "$TEST_TRANSPORT", 00:48:32.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:32.645 "adrfam": "ipv4", 00:48:32.645 "trsvcid": "$NVMF_PORT", 00:48:32.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:32.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:32.645 "hdgst": ${hdgst:-false}, 00:48:32.645 "ddgst": ${ddgst:-false} 00:48:32.645 }, 00:48:32.645 "method": "bdev_nvme_attach_controller" 00:48:32.645 } 00:48:32.645 EOF 00:48:32.645 )") 00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:32.645 { 00:48:32.645 "params": { 00:48:32.645 "name": "Nvme$subsystem", 00:48:32.645 
"trtype": "$TEST_TRANSPORT", 00:48:32.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:32.645 "adrfam": "ipv4", 00:48:32.645 "trsvcid": "$NVMF_PORT", 00:48:32.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:32.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:32.645 "hdgst": ${hdgst:-false}, 00:48:32.645 "ddgst": ${ddgst:-false} 00:48:32.645 }, 00:48:32.645 "method": "bdev_nvme_attach_controller" 00:48:32.645 } 00:48:32.645 EOF 00:48:32.645 )") 00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:32.645 { 00:48:32.645 "params": { 00:48:32.645 "name": "Nvme$subsystem", 00:48:32.645 "trtype": "$TEST_TRANSPORT", 00:48:32.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:32.645 "adrfam": "ipv4", 00:48:32.645 "trsvcid": "$NVMF_PORT", 00:48:32.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:32.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:32.645 "hdgst": ${hdgst:-false}, 00:48:32.645 "ddgst": ${ddgst:-false} 00:48:32.645 }, 00:48:32.645 "method": "bdev_nvme_attach_controller" 00:48:32.645 } 00:48:32.645 EOF 00:48:32.645 )") 00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:48:32.645 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:32.646 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:32.646 { 00:48:32.646 "params": { 00:48:32.646 "name": "Nvme$subsystem", 00:48:32.646 "trtype": "$TEST_TRANSPORT", 00:48:32.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:32.646 "adrfam": "ipv4", 00:48:32.646 "trsvcid": "$NVMF_PORT", 00:48:32.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:32.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:32.646 "hdgst": ${hdgst:-false}, 00:48:32.646 "ddgst": ${ddgst:-false} 00:48:32.646 }, 00:48:32.646 "method": "bdev_nvme_attach_controller" 00:48:32.646 } 00:48:32.646 EOF 00:48:32.646 )") 00:48:32.646 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:48:32.646 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:32.646 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:32.646 { 00:48:32.646 "params": { 00:48:32.646 "name": "Nvme$subsystem", 00:48:32.646 "trtype": "$TEST_TRANSPORT", 00:48:32.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:32.646 "adrfam": "ipv4", 00:48:32.646 "trsvcid": "$NVMF_PORT", 00:48:32.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:32.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:32.646 "hdgst": ${hdgst:-false}, 00:48:32.646 "ddgst": ${ddgst:-false} 00:48:32.646 }, 00:48:32.646 "method": "bdev_nvme_attach_controller" 00:48:32.646 } 00:48:32.646 EOF 00:48:32.646 )") 00:48:32.646 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:48:32.646 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:32.646 05:43:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:32.646 { 00:48:32.646 "params": { 00:48:32.646 "name": "Nvme$subsystem", 00:48:32.646 "trtype": "$TEST_TRANSPORT", 00:48:32.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:32.646 "adrfam": "ipv4", 00:48:32.646 "trsvcid": "$NVMF_PORT", 00:48:32.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:32.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:32.646 "hdgst": ${hdgst:-false}, 00:48:32.646 "ddgst": ${ddgst:-false} 00:48:32.646 }, 00:48:32.646 "method": "bdev_nvme_attach_controller" 00:48:32.646 } 00:48:32.646 EOF 00:48:32.646 )") 00:48:32.646 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:48:32.646 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:32.646 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:32.646 { 00:48:32.646 "params": { 00:48:32.646 "name": "Nvme$subsystem", 00:48:32.646 "trtype": "$TEST_TRANSPORT", 00:48:32.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:32.646 "adrfam": "ipv4", 00:48:32.646 "trsvcid": "$NVMF_PORT", 00:48:32.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:32.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:32.646 "hdgst": ${hdgst:-false}, 00:48:32.646 "ddgst": ${ddgst:-false} 00:48:32.646 }, 00:48:32.646 "method": "bdev_nvme_attach_controller" 00:48:32.646 } 00:48:32.646 EOF 00:48:32.646 )") 00:48:32.646 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:48:32.646 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:32.646 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:32.646 { 00:48:32.646 "params": { 00:48:32.646 "name": "Nvme$subsystem", 00:48:32.646 "trtype": "$TEST_TRANSPORT", 00:48:32.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:32.646 "adrfam": "ipv4", 00:48:32.646 "trsvcid": "$NVMF_PORT", 00:48:32.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:32.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:32.646 "hdgst": ${hdgst:-false}, 00:48:32.646 "ddgst": ${ddgst:-false} 00:48:32.646 }, 00:48:32.646 "method": "bdev_nvme_attach_controller" 00:48:32.646 } 00:48:32.646 EOF 00:48:32.646 )") 00:48:32.646 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:48:32.646 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:48:32.646 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:48:32.646 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:48:32.646 "params": { 00:48:32.646 "name": "Nvme1", 00:48:32.646 "trtype": "tcp", 00:48:32.646 "traddr": "10.0.0.2", 00:48:32.646 "adrfam": "ipv4", 00:48:32.646 "trsvcid": "4420", 00:48:32.646 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:48:32.646 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:48:32.646 "hdgst": false, 00:48:32.646 "ddgst": false 00:48:32.646 }, 00:48:32.646 "method": "bdev_nvme_attach_controller" 00:48:32.646 },{ 00:48:32.646 "params": { 00:48:32.646 "name": "Nvme2", 00:48:32.646 "trtype": "tcp", 00:48:32.646 "traddr": "10.0.0.2", 00:48:32.646 "adrfam": "ipv4", 00:48:32.646 "trsvcid": "4420", 00:48:32.646 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:48:32.646 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:48:32.646 "hdgst": false, 00:48:32.646 "ddgst": false 00:48:32.646 }, 00:48:32.646 "method": "bdev_nvme_attach_controller" 00:48:32.646 },{ 00:48:32.646 "params": { 00:48:32.646 "name": "Nvme3", 00:48:32.646 "trtype": "tcp", 00:48:32.646 "traddr": "10.0.0.2", 00:48:32.646 "adrfam": "ipv4", 00:48:32.646 "trsvcid": "4420", 00:48:32.646 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:48:32.646 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:48:32.646 "hdgst": false, 00:48:32.646 "ddgst": false 00:48:32.646 }, 00:48:32.646 "method": "bdev_nvme_attach_controller" 00:48:32.646 },{ 00:48:32.646 "params": { 00:48:32.646 "name": "Nvme4", 00:48:32.646 "trtype": "tcp", 00:48:32.646 "traddr": "10.0.0.2", 00:48:32.646 "adrfam": "ipv4", 00:48:32.646 "trsvcid": "4420", 00:48:32.646 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:48:32.646 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:48:32.646 "hdgst": false, 00:48:32.646 "ddgst": false 00:48:32.646 }, 00:48:32.646 "method": "bdev_nvme_attach_controller" 00:48:32.646 },{ 00:48:32.646 "params": { 00:48:32.646 "name": "Nvme5", 00:48:32.646 "trtype": "tcp", 00:48:32.646 "traddr": "10.0.0.2", 00:48:32.646 "adrfam": "ipv4", 00:48:32.646 "trsvcid": "4420", 00:48:32.646 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:48:32.646 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:48:32.646 "hdgst": false, 00:48:32.646 "ddgst": false 00:48:32.646 }, 00:48:32.646 "method": "bdev_nvme_attach_controller" 00:48:32.646 },{ 00:48:32.646 "params": { 00:48:32.646 "name": "Nvme6", 00:48:32.646 "trtype": "tcp", 00:48:32.646 "traddr": "10.0.0.2", 00:48:32.646 "adrfam": "ipv4", 00:48:32.646 "trsvcid": "4420", 00:48:32.646 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:48:32.646 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:48:32.646 "hdgst": false, 00:48:32.646 "ddgst": false 00:48:32.646 }, 00:48:32.646 "method": "bdev_nvme_attach_controller" 00:48:32.646 },{ 00:48:32.646 "params": { 00:48:32.646 "name": "Nvme7", 00:48:32.646 "trtype": "tcp", 00:48:32.646 "traddr": "10.0.0.2", 00:48:32.646 "adrfam": "ipv4", 00:48:32.646 "trsvcid": "4420", 00:48:32.646 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:48:32.646 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:48:32.646 "hdgst": false, 00:48:32.646 "ddgst": false 00:48:32.646 }, 00:48:32.646 "method": "bdev_nvme_attach_controller" 00:48:32.646 },{ 00:48:32.646 "params": { 00:48:32.646 "name": "Nvme8", 00:48:32.646 "trtype": "tcp", 00:48:32.646 "traddr": "10.0.0.2", 00:48:32.646 "adrfam": "ipv4", 00:48:32.646 "trsvcid": "4420", 00:48:32.646 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:48:32.646 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:48:32.646 "hdgst": false, 00:48:32.646 "ddgst": false 00:48:32.646 }, 00:48:32.646 "method": "bdev_nvme_attach_controller" 00:48:32.646 },{ 00:48:32.646 "params": { 00:48:32.646 "name": "Nvme9", 00:48:32.646 "trtype": "tcp", 00:48:32.646 "traddr": "10.0.0.2", 00:48:32.646 "adrfam": "ipv4", 00:48:32.646 "trsvcid": "4420", 00:48:32.646 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:48:32.646 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:48:32.646 "hdgst": false, 00:48:32.646 "ddgst": false 00:48:32.646 }, 00:48:32.646 "method": "bdev_nvme_attach_controller" 00:48:32.646 },{ 00:48:32.646 "params": { 00:48:32.646 "name": "Nvme10", 00:48:32.646 "trtype": "tcp", 00:48:32.646 "traddr": "10.0.0.2", 00:48:32.646 "adrfam": "ipv4", 00:48:32.646 "trsvcid": "4420", 00:48:32.646 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:48:32.646 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:48:32.646 "hdgst": false, 00:48:32.646 "ddgst": false 00:48:32.646 }, 00:48:32.646 "method": "bdev_nvme_attach_controller" 00:48:32.646 }' 00:48:32.646 [2024-12-09 05:43:26.683030] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:48:32.647 [2024-12-09 05:43:26.683114] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:48:32.647 [2024-12-09 05:43:26.754700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:32.647 [2024-12-09 05:43:26.814368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:34.545 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:34.545 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:48:34.545 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:48:34.545 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:34.545 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:48:34.545 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:34.545 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 695468 00:48:34.545 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:48:34.545 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:48:35.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 695468 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:48:35.919 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 695363 00:48:35.919 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:48:35.919 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 
3 4 5 6 7 8 9 10 00:48:35.919 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:48:35.919 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:48:35.919 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:35.919 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:35.919 { 00:48:35.919 "params": { 00:48:35.919 "name": "Nvme$subsystem", 00:48:35.919 "trtype": "$TEST_TRANSPORT", 00:48:35.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:35.919 "adrfam": "ipv4", 00:48:35.919 "trsvcid": "$NVMF_PORT", 00:48:35.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:35.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:35.919 "hdgst": ${hdgst:-false}, 00:48:35.919 "ddgst": ${ddgst:-false} 00:48:35.919 }, 00:48:35.919 "method": "bdev_nvme_attach_controller" 00:48:35.919 } 00:48:35.919 EOF 00:48:35.919 )") 00:48:35.919 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:48:35.919 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:35.919 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:35.919 { 00:48:35.919 "params": { 00:48:35.919 "name": "Nvme$subsystem", 00:48:35.919 "trtype": "$TEST_TRANSPORT", 00:48:35.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:35.919 "adrfam": "ipv4", 00:48:35.919 "trsvcid": "$NVMF_PORT", 00:48:35.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:35.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:35.919 "hdgst": ${hdgst:-false}, 00:48:35.919 "ddgst": ${ddgst:-false} 00:48:35.919 }, 00:48:35.919 "method": "bdev_nvme_attach_controller" 00:48:35.919 } 00:48:35.919 EOF 00:48:35.919 )") 00:48:35.919 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:48:35.919 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:35.919 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:35.919 { 00:48:35.919 "params": { 00:48:35.919 "name": "Nvme$subsystem", 00:48:35.919 "trtype": "$TEST_TRANSPORT", 00:48:35.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:35.919 "adrfam": "ipv4", 00:48:35.919 "trsvcid": "$NVMF_PORT", 00:48:35.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:35.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:35.919 "hdgst": ${hdgst:-false}, 00:48:35.919 "ddgst": ${ddgst:-false} 00:48:35.919 }, 00:48:35.919 "method": "bdev_nvme_attach_controller" 00:48:35.919 } 00:48:35.919 EOF 00:48:35.919 )") 00:48:35.919 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:48:35.919 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:35.919 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:35.919 { 00:48:35.919 "params": { 00:48:35.919 "name": "Nvme$subsystem", 00:48:35.919 "trtype": "$TEST_TRANSPORT", 00:48:35.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:35.919 "adrfam": "ipv4", 00:48:35.919 
"trsvcid": "$NVMF_PORT", 00:48:35.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:35.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:35.919 "hdgst": ${hdgst:-false}, 00:48:35.919 "ddgst": ${ddgst:-false} 00:48:35.919 }, 00:48:35.919 "method": "bdev_nvme_attach_controller" 00:48:35.919 } 00:48:35.919 EOF 00:48:35.919 )") 00:48:35.919 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:48:35.919 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:35.920 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:35.920 { 00:48:35.920 "params": { 00:48:35.920 "name": "Nvme$subsystem", 00:48:35.920 "trtype": "$TEST_TRANSPORT", 00:48:35.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:35.920 "adrfam": "ipv4", 00:48:35.920 "trsvcid": "$NVMF_PORT", 00:48:35.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:35.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:35.920 "hdgst": ${hdgst:-false}, 00:48:35.920 "ddgst": ${ddgst:-false} 00:48:35.920 }, 00:48:35.920 "method": "bdev_nvme_attach_controller" 00:48:35.920 } 00:48:35.920 EOF 00:48:35.920 )") 00:48:35.920 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:48:35.920 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:35.920 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:35.920 { 00:48:35.920 "params": { 00:48:35.920 "name": "Nvme$subsystem", 00:48:35.920 "trtype": "$TEST_TRANSPORT", 00:48:35.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:35.920 "adrfam": "ipv4", 00:48:35.920 "trsvcid": "$NVMF_PORT", 00:48:35.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:35.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:35.920 "hdgst": ${hdgst:-false}, 00:48:35.920 "ddgst": ${ddgst:-false} 00:48:35.920 }, 00:48:35.920 "method": "bdev_nvme_attach_controller" 00:48:35.920 } 00:48:35.920 EOF 00:48:35.920 )") 00:48:35.920 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:48:35.920 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:35.920 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:35.920 { 00:48:35.920 "params": { 00:48:35.920 "name": "Nvme$subsystem", 00:48:35.920 "trtype": "$TEST_TRANSPORT", 00:48:35.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:35.920 "adrfam": "ipv4", 00:48:35.920 "trsvcid": "$NVMF_PORT", 00:48:35.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:35.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:35.920 "hdgst": ${hdgst:-false}, 00:48:35.920 "ddgst": ${ddgst:-false} 00:48:35.920 }, 00:48:35.920 "method": "bdev_nvme_attach_controller" 00:48:35.920 } 00:48:35.920 EOF 00:48:35.920 )") 00:48:35.920 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:48:35.920 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:35.920 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:35.920 { 00:48:35.920 
"params": { 00:48:35.920 "name": "Nvme$subsystem", 00:48:35.920 "trtype": "$TEST_TRANSPORT", 00:48:35.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:35.920 "adrfam": "ipv4", 00:48:35.920 "trsvcid": "$NVMF_PORT", 00:48:35.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:35.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:35.920 "hdgst": ${hdgst:-false}, 00:48:35.920 "ddgst": ${ddgst:-false} 00:48:35.920 }, 00:48:35.920 "method": "bdev_nvme_attach_controller" 00:48:35.920 } 00:48:35.920 EOF 00:48:35.920 )") 00:48:35.920 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:48:35.920 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:35.920 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:35.920 { 00:48:35.920 "params": { 00:48:35.920 "name": "Nvme$subsystem", 00:48:35.920 "trtype": "$TEST_TRANSPORT", 00:48:35.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:35.920 "adrfam": "ipv4", 00:48:35.920 "trsvcid": "$NVMF_PORT", 00:48:35.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:35.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:35.920 "hdgst": ${hdgst:-false}, 00:48:35.920 "ddgst": ${ddgst:-false} 00:48:35.920 }, 00:48:35.920 "method": "bdev_nvme_attach_controller" 00:48:35.920 } 00:48:35.920 EOF 00:48:35.920 )") 00:48:35.920 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:48:35.920 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:35.920 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:35.920 { 00:48:35.920 "params": { 00:48:35.920 "name": "Nvme$subsystem", 00:48:35.920 "trtype": "$TEST_TRANSPORT", 00:48:35.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:35.920 "adrfam": "ipv4", 00:48:35.920 "trsvcid": "$NVMF_PORT", 00:48:35.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:35.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:35.920 "hdgst": ${hdgst:-false}, 00:48:35.920 "ddgst": ${ddgst:-false} 00:48:35.920 }, 00:48:35.920 "method": "bdev_nvme_attach_controller" 00:48:35.920 } 00:48:35.920 EOF 00:48:35.920 )") 00:48:35.920 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:48:35.920 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:48:35.920 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:48:35.920 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:48:35.920 "params": { 00:48:35.920 "name": "Nvme1", 00:48:35.920 "trtype": "tcp", 00:48:35.920 "traddr": "10.0.0.2", 00:48:35.920 "adrfam": "ipv4", 00:48:35.920 "trsvcid": "4420", 00:48:35.920 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:48:35.920 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:48:35.920 "hdgst": false, 00:48:35.920 "ddgst": false 00:48:35.920 }, 00:48:35.920 "method": "bdev_nvme_attach_controller" 00:48:35.920 },{ 00:48:35.920 "params": { 00:48:35.920 "name": "Nvme2", 00:48:35.920 "trtype": "tcp", 00:48:35.920 "traddr": "10.0.0.2", 00:48:35.920 "adrfam": "ipv4", 00:48:35.920 "trsvcid": "4420", 00:48:35.920 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:48:35.920 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:48:35.920 "hdgst": false, 00:48:35.920 "ddgst": false 00:48:35.920 }, 00:48:35.920 "method": "bdev_nvme_attach_controller" 00:48:35.920 },{ 00:48:35.920 "params": { 00:48:35.920 "name": "Nvme3", 00:48:35.920 "trtype": "tcp", 00:48:35.920 "traddr": "10.0.0.2", 00:48:35.920 "adrfam": "ipv4", 00:48:35.920 "trsvcid": "4420", 00:48:35.920 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:48:35.920 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:48:35.920 "hdgst": false, 00:48:35.920 "ddgst": false 00:48:35.920 }, 00:48:35.920 "method": "bdev_nvme_attach_controller" 00:48:35.920 },{ 00:48:35.920 "params": { 00:48:35.920 "name": "Nvme4", 00:48:35.920 "trtype": "tcp", 00:48:35.920 "traddr": "10.0.0.2", 00:48:35.920 "adrfam": "ipv4", 00:48:35.920 "trsvcid": "4420", 00:48:35.920 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:48:35.920 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:48:35.920 "hdgst": false, 00:48:35.920 "ddgst": false 00:48:35.920 }, 00:48:35.920 "method": "bdev_nvme_attach_controller" 00:48:35.920 },{ 00:48:35.920 "params": { 00:48:35.920 "name": "Nvme5", 00:48:35.920 "trtype": "tcp", 00:48:35.920 "traddr": "10.0.0.2", 00:48:35.920 "adrfam": "ipv4", 00:48:35.920 "trsvcid": "4420", 00:48:35.920 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:48:35.920 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:48:35.920 "hdgst": false, 00:48:35.920 "ddgst": false 00:48:35.920 }, 00:48:35.920 "method": "bdev_nvme_attach_controller" 00:48:35.920 },{ 00:48:35.920 "params": { 00:48:35.920 "name": "Nvme6", 00:48:35.920 "trtype": "tcp", 00:48:35.920 "traddr": "10.0.0.2", 00:48:35.920 "adrfam": "ipv4", 00:48:35.920 "trsvcid": "4420", 00:48:35.920 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:48:35.920 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:48:35.920 "hdgst": false, 00:48:35.920 "ddgst": false 00:48:35.920 }, 00:48:35.920 "method": "bdev_nvme_attach_controller" 00:48:35.920 },{ 00:48:35.920 "params": { 00:48:35.920 "name": "Nvme7", 00:48:35.920 "trtype": "tcp", 00:48:35.920 "traddr": "10.0.0.2", 00:48:35.920 "adrfam": "ipv4", 00:48:35.920 "trsvcid": "4420", 00:48:35.920 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:48:35.920 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:48:35.920 "hdgst": false, 00:48:35.920 "ddgst": false 00:48:35.920 }, 00:48:35.920 "method": "bdev_nvme_attach_controller" 00:48:35.920 },{ 00:48:35.920 "params": { 00:48:35.920 "name": "Nvme8", 00:48:35.920 "trtype": "tcp", 00:48:35.920 "traddr": "10.0.0.2", 00:48:35.920 "adrfam": "ipv4", 00:48:35.920 "trsvcid": "4420", 00:48:35.920 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:48:35.920 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:48:35.920 "hdgst": false, 00:48:35.920 "ddgst": false 00:48:35.920 }, 00:48:35.920 "method": "bdev_nvme_attach_controller" 00:48:35.920 },{ 00:48:35.920 "params": { 00:48:35.920 "name": "Nvme9", 00:48:35.920 "trtype": "tcp", 00:48:35.920 "traddr": "10.0.0.2", 00:48:35.920 "adrfam": "ipv4", 00:48:35.920 "trsvcid": "4420", 00:48:35.920 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:48:35.920 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:48:35.920 "hdgst": false, 00:48:35.920 "ddgst": false 00:48:35.920 }, 00:48:35.920 "method": "bdev_nvme_attach_controller" 00:48:35.921 },{ 00:48:35.921 "params": { 00:48:35.921 "name": "Nvme10", 00:48:35.921 "trtype": "tcp", 00:48:35.921 "traddr": "10.0.0.2", 00:48:35.921 "adrfam": "ipv4", 00:48:35.921 "trsvcid": "4420", 00:48:35.921 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:48:35.921 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:48:35.921 "hdgst": false, 00:48:35.921 "ddgst": false 00:48:35.921 }, 00:48:35.921 "method": "bdev_nvme_attach_controller" 00:48:35.921 }' 00:48:35.921 [2024-12-09 05:43:29.791007] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:48:35.921 [2024-12-09 05:43:29.791094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid695885 ] 00:48:35.921 [2024-12-09 05:43:29.863280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:35.921 [2024-12-09 05:43:29.923011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:37.819 Running I/O for 1 seconds... 00:48:38.640 1799.00 IOPS, 112.44 MiB/s 00:48:38.640 Latency(us) 00:48:38.640 [2024-12-09T04:43:32.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:38.641 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:48:38.641 Verification LBA range: start 0x0 length 0x400 00:48:38.641 Nvme1n1 : 1.11 229.77 14.36 0.00 0.00 275899.73 25631.86 257872.02 00:48:38.641 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:48:38.641 Verification LBA range: start 0x0 length 0x400 00:48:38.641 Nvme2n1 : 1.11 230.36 14.40 0.00 0.00 270345.48 18544.26 257872.02 00:48:38.641 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:48:38.641 Verification LBA range: start 0x0 length 0x400 00:48:38.641 Nvme3n1 : 1.10 239.23 14.95 0.00 0.00 254887.83 5048.70 262532.36 00:48:38.641 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:48:38.641 Verification LBA range: start 0x0 length 0x400 00:48:38.641 Nvme4n1 : 1.10 232.27 14.52 0.00 0.00 259334.07 18641.35 250104.79 00:48:38.641 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:48:38.641 Verification LBA range: start 0x0 length 0x400 00:48:38.641 Nvme5n1 : 1.15 223.26 13.95 0.00 0.00 266293.10 27573.67 268746.15 00:48:38.641 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:48:38.641 Verification LBA range: start 0x0 length 0x400 00:48:38.641 Nvme6n1 : 1.15 222.77 13.92 0.00 0.00 262504.49 20874.43 257872.02 00:48:38.641 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:48:38.641 Verification LBA range: start 0x0 length 0x400 00:48:38.641 Nvme7n1 : 1.16 221.14 13.82 0.00 0.00 260232.91 20388.98 260978.92 00:48:38.641 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:48:38.641 Verification 
LBA range: start 0x0 length 0x400 00:48:38.641 Nvme8n1 : 1.17 273.18 17.07 0.00 0.00 207218.46 12718.84 222142.77 00:48:38.641 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:48:38.641 Verification LBA range: start 0x0 length 0x400 00:48:38.641 Nvme9n1 : 1.16 220.12 13.76 0.00 0.00 252871.87 20777.34 274959.93 00:48:38.641 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:48:38.641 Verification LBA range: start 0x0 length 0x400 00:48:38.641 Nvme10n1 : 1.17 219.52 13.72 0.00 0.00 249282.37 18738.44 284280.60 00:48:38.641 [2024-12-09T04:43:32.866Z] =================================================================================================================== 00:48:38.641 [2024-12-09T04:43:32.866Z] Total : 2311.62 144.48 0.00 0.00 254700.49 5048.70 284280.60 00:48:38.898 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:48:38.899 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:48:38.899 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:48:38.899 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:48:38.899 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:48:38.899 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:48:38.899 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:48:38.899 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:48:38.899 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:48:38.899 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:48:38.899 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:48:38.899 rmmod nvme_tcp 00:48:38.899 rmmod nvme_fabrics 00:48:38.899 rmmod nvme_keyring 00:48:38.899 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:48:38.899 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:48:38.899 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:48:38.899 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 695363 ']' 00:48:38.899 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 695363 00:48:38.899 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 695363 ']' 00:48:38.899 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 695363 00:48:38.899 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:48:38.899 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:38.899 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 695363 00:48:38.899 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:48:38.899 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:48:38.899 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 695363' 00:48:38.899 killing process with pid 695363 00:48:38.899 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 695363 00:48:38.899 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 695363 00:48:39.463 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:48:39.463 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:48:39.463 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:48:39.463 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:48:39.463 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:48:39.463 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:48:39.463 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:48:39.463 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:48:39.463 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:48:39.463 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:39.463 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:39.464 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:48:41.994 00:48:41.994 real 0m12.378s 00:48:41.994 user 0m36.031s 00:48:41.994 sys 0m3.352s 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:48:41.994 ************************************ 00:48:41.994 END TEST nvmf_shutdown_tc1 00:48:41.994 ************************************ 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:48:41.994 ************************************ 00:48:41.994 START TEST nvmf_shutdown_tc2 00:48:41.994 ************************************ 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:48:41.994 05:43:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:48:41.994 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:48:41.994 05:43:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:48:41.994 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:48:41.994 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:48:41.995 Found net devices under 0000:0a:00.0: cvl_0_0 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:48:41.995 Found net devices under 0000:0a:00.1: cvl_0_1 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:48:41.995 05:43:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:48:41.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:48:41.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:48:41.995 00:48:41.995 --- 10.0.0.2 ping statistics --- 00:48:41.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:41.995 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:48:41.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:48:41.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:48:41.995 00:48:41.995 --- 10.0.0.1 ping statistics --- 00:48:41.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:41.995 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=696731 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 696731 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 696731 ']' 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:41.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
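For reference, the target-side network plumbing traced above (the nvmf_tcp_init steps in nvmf/common.sh) boils down to the sketch below. The interface names and addresses are the ones from this run (the two E810 ports discovered as cvl_0_0 and cvl_0_1); the sketch is illustrative only, not the helper's actual code.

#!/usr/bin/env bash
# Sketch of the traced setup: move one port into a namespace for the target,
# keep the other in the root namespace for the initiator, open TCP/4420, and
# verify reachability before nvmf_tgt is started inside the namespace.
set -e

TARGET_NS=cvl_0_0_ns_spdk     # namespace that will run nvmf_tgt
TARGET_IF=cvl_0_0             # port handed to the namespace, gets 10.0.0.2
INITIATOR_IF=cvl_0_1          # port left in the root namespace, gets 10.0.0.1

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$TARGET_NS"
ip link set "$TARGET_IF" netns "$TARGET_NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
ip netns exec "$TARGET_NS" ip link set lo up

# Accept NVMe/TCP traffic from the initiator-side port; the SPDK_NVMF comment tag
# is what the iptables-save / grep -v SPDK_NVMF cleanup step in this log greps for later.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Reachability check in both directions, mirroring the two pings above.
ping -c 1 10.0.0.2
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1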
00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:41.995 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:48:41.995 [2024-12-09 05:43:35.988197] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:48:41.995 [2024-12-09 05:43:35.988327] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:41.995 [2024-12-09 05:43:36.060688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:48:41.995 [2024-12-09 05:43:36.118642] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:48:41.995 [2024-12-09 05:43:36.118711] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:48:41.995 [2024-12-09 05:43:36.118725] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:48:41.995 [2024-12-09 05:43:36.118736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:48:41.995 [2024-12-09 05:43:36.118745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:48:41.995 [2024-12-09 05:43:36.120287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:48:41.995 [2024-12-09 05:43:36.120409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:48:41.995 [2024-12-09 05:43:36.120475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:48:41.995 [2024-12-09 05:43:36.120479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:48:42.254 [2024-12-09 05:43:36.280238] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:48:42.254 05:43:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:42.254 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:48:42.254 Malloc1 
00:48:42.254 [2024-12-09 05:43:36.381744] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:42.254 Malloc2 00:48:42.254 Malloc3 00:48:42.513 Malloc4 00:48:42.513 Malloc5 00:48:42.513 Malloc6 00:48:42.513 Malloc7 00:48:42.513 Malloc8 00:48:42.771 Malloc9 00:48:42.771 Malloc10 00:48:42.771 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:42.771 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:48:42.771 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=696841 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 696841 /var/tmp/bdevperf.sock 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 696841 ']' 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:48:42.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:42.772 { 00:48:42.772 "params": { 00:48:42.772 "name": "Nvme$subsystem", 00:48:42.772 "trtype": "$TEST_TRANSPORT", 00:48:42.772 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:42.772 "adrfam": "ipv4", 00:48:42.772 "trsvcid": "$NVMF_PORT", 00:48:42.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:42.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:42.772 "hdgst": ${hdgst:-false}, 00:48:42.772 "ddgst": ${ddgst:-false} 00:48:42.772 }, 00:48:42.772 "method": "bdev_nvme_attach_controller" 00:48:42.772 } 00:48:42.772 EOF 00:48:42.772 )") 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:42.772 { 00:48:42.772 "params": { 00:48:42.772 "name": "Nvme$subsystem", 00:48:42.772 "trtype": "$TEST_TRANSPORT", 00:48:42.772 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:42.772 "adrfam": "ipv4", 00:48:42.772 "trsvcid": "$NVMF_PORT", 00:48:42.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:42.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:42.772 "hdgst": ${hdgst:-false}, 00:48:42.772 "ddgst": ${ddgst:-false} 00:48:42.772 }, 00:48:42.772 "method": "bdev_nvme_attach_controller" 00:48:42.772 } 00:48:42.772 EOF 00:48:42.772 )") 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:42.772 { 00:48:42.772 "params": { 00:48:42.772 "name": "Nvme$subsystem", 00:48:42.772 "trtype": "$TEST_TRANSPORT", 00:48:42.772 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:42.772 "adrfam": "ipv4", 00:48:42.772 "trsvcid": "$NVMF_PORT", 00:48:42.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:42.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:42.772 "hdgst": ${hdgst:-false}, 00:48:42.772 "ddgst": ${ddgst:-false} 00:48:42.772 }, 00:48:42.772 "method": "bdev_nvme_attach_controller" 00:48:42.772 } 00:48:42.772 EOF 00:48:42.772 )") 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:42.772 { 00:48:42.772 "params": { 00:48:42.772 "name": "Nvme$subsystem", 00:48:42.772 
"trtype": "$TEST_TRANSPORT", 00:48:42.772 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:42.772 "adrfam": "ipv4", 00:48:42.772 "trsvcid": "$NVMF_PORT", 00:48:42.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:42.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:42.772 "hdgst": ${hdgst:-false}, 00:48:42.772 "ddgst": ${ddgst:-false} 00:48:42.772 }, 00:48:42.772 "method": "bdev_nvme_attach_controller" 00:48:42.772 } 00:48:42.772 EOF 00:48:42.772 )") 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:42.772 { 00:48:42.772 "params": { 00:48:42.772 "name": "Nvme$subsystem", 00:48:42.772 "trtype": "$TEST_TRANSPORT", 00:48:42.772 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:42.772 "adrfam": "ipv4", 00:48:42.772 "trsvcid": "$NVMF_PORT", 00:48:42.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:42.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:42.772 "hdgst": ${hdgst:-false}, 00:48:42.772 "ddgst": ${ddgst:-false} 00:48:42.772 }, 00:48:42.772 "method": "bdev_nvme_attach_controller" 00:48:42.772 } 00:48:42.772 EOF 00:48:42.772 )") 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:42.772 { 00:48:42.772 "params": { 00:48:42.772 "name": "Nvme$subsystem", 00:48:42.772 "trtype": "$TEST_TRANSPORT", 00:48:42.772 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:42.772 "adrfam": "ipv4", 00:48:42.772 "trsvcid": "$NVMF_PORT", 00:48:42.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:42.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:42.772 "hdgst": ${hdgst:-false}, 00:48:42.772 "ddgst": ${ddgst:-false} 00:48:42.772 }, 00:48:42.772 "method": "bdev_nvme_attach_controller" 00:48:42.772 } 00:48:42.772 EOF 00:48:42.772 )") 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:42.772 { 00:48:42.772 "params": { 00:48:42.772 "name": "Nvme$subsystem", 00:48:42.772 "trtype": "$TEST_TRANSPORT", 00:48:42.772 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:42.772 "adrfam": "ipv4", 00:48:42.772 "trsvcid": "$NVMF_PORT", 00:48:42.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:42.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:42.772 "hdgst": ${hdgst:-false}, 00:48:42.772 "ddgst": ${ddgst:-false} 00:48:42.772 }, 00:48:42.772 "method": "bdev_nvme_attach_controller" 00:48:42.772 } 00:48:42.772 EOF 00:48:42.772 )") 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:42.772 05:43:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:42.772 { 00:48:42.772 "params": { 00:48:42.772 "name": "Nvme$subsystem", 00:48:42.772 "trtype": "$TEST_TRANSPORT", 00:48:42.772 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:42.772 "adrfam": "ipv4", 00:48:42.772 "trsvcid": "$NVMF_PORT", 00:48:42.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:42.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:42.772 "hdgst": ${hdgst:-false}, 00:48:42.772 "ddgst": ${ddgst:-false} 00:48:42.772 }, 00:48:42.772 "method": "bdev_nvme_attach_controller" 00:48:42.772 } 00:48:42.772 EOF 00:48:42.772 )") 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:42.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:42.772 { 00:48:42.772 "params": { 00:48:42.772 "name": "Nvme$subsystem", 00:48:42.772 "trtype": "$TEST_TRANSPORT", 00:48:42.772 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:42.772 "adrfam": "ipv4", 00:48:42.772 "trsvcid": "$NVMF_PORT", 00:48:42.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:42.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:42.772 "hdgst": ${hdgst:-false}, 00:48:42.772 "ddgst": ${ddgst:-false} 00:48:42.772 }, 00:48:42.772 "method": "bdev_nvme_attach_controller" 00:48:42.772 } 00:48:42.772 EOF 00:48:42.772 )") 00:48:42.773 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:48:42.773 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:42.773 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:42.773 { 00:48:42.773 "params": { 00:48:42.773 "name": "Nvme$subsystem", 00:48:42.773 "trtype": "$TEST_TRANSPORT", 00:48:42.773 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:42.773 "adrfam": "ipv4", 00:48:42.773 "trsvcid": "$NVMF_PORT", 00:48:42.773 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:42.773 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:42.773 "hdgst": ${hdgst:-false}, 00:48:42.773 "ddgst": ${ddgst:-false} 00:48:42.773 }, 00:48:42.773 "method": "bdev_nvme_attach_controller" 00:48:42.773 } 00:48:42.773 EOF 00:48:42.773 )") 00:48:42.773 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:48:42.773 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
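Condensed, the loop traced above amounts to the bash idiom below: generate one JSON fragment per subsystem with a heredoc, collect the fragments in an array, then join them with a comma IFS (the separate IFS=, and printf trace lines). This is an illustrative re-creation, not the nvmf/common.sh source; the [%s] wrapper exists here only so jq can pretty-print the joined fragments on their own, whereas the real helper splices them into the full bdevperf config shown next.

# Build one bdev_nvme_attach_controller fragment per subsystem.
config=()
for subsystem in {1..10}; do
    config+=("$(
        cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# Comma-join the fragments; IFS is changed only inside the subshell.
(
    IFS=","
    printf '[%s]\n' "${config[*]}"
) | jq .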
00:48:42.773 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:48:42.773 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:48:42.773 "params": { 00:48:42.773 "name": "Nvme1", 00:48:42.773 "trtype": "tcp", 00:48:42.773 "traddr": "10.0.0.2", 00:48:42.773 "adrfam": "ipv4", 00:48:42.773 "trsvcid": "4420", 00:48:42.773 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:48:42.773 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:48:42.773 "hdgst": false, 00:48:42.773 "ddgst": false 00:48:42.773 }, 00:48:42.773 "method": "bdev_nvme_attach_controller" 00:48:42.773 },{ 00:48:42.773 "params": { 00:48:42.773 "name": "Nvme2", 00:48:42.773 "trtype": "tcp", 00:48:42.773 "traddr": "10.0.0.2", 00:48:42.773 "adrfam": "ipv4", 00:48:42.773 "trsvcid": "4420", 00:48:42.773 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:48:42.773 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:48:42.773 "hdgst": false, 00:48:42.773 "ddgst": false 00:48:42.773 }, 00:48:42.773 "method": "bdev_nvme_attach_controller" 00:48:42.773 },{ 00:48:42.773 "params": { 00:48:42.773 "name": "Nvme3", 00:48:42.773 "trtype": "tcp", 00:48:42.773 "traddr": "10.0.0.2", 00:48:42.773 "adrfam": "ipv4", 00:48:42.773 "trsvcid": "4420", 00:48:42.773 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:48:42.773 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:48:42.773 "hdgst": false, 00:48:42.773 "ddgst": false 00:48:42.773 }, 00:48:42.773 "method": "bdev_nvme_attach_controller" 00:48:42.773 },{ 00:48:42.773 "params": { 00:48:42.773 "name": "Nvme4", 00:48:42.773 "trtype": "tcp", 00:48:42.773 "traddr": "10.0.0.2", 00:48:42.773 "adrfam": "ipv4", 00:48:42.773 "trsvcid": "4420", 00:48:42.773 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:48:42.773 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:48:42.773 "hdgst": false, 00:48:42.773 "ddgst": false 00:48:42.773 }, 00:48:42.773 "method": "bdev_nvme_attach_controller" 00:48:42.773 },{ 00:48:42.773 "params": { 00:48:42.773 "name": "Nvme5", 00:48:42.773 "trtype": "tcp", 00:48:42.773 "traddr": "10.0.0.2", 00:48:42.773 "adrfam": "ipv4", 00:48:42.773 "trsvcid": "4420", 00:48:42.773 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:48:42.773 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:48:42.773 "hdgst": false, 00:48:42.773 "ddgst": false 00:48:42.773 }, 00:48:42.773 "method": "bdev_nvme_attach_controller" 00:48:42.773 },{ 00:48:42.773 "params": { 00:48:42.773 "name": "Nvme6", 00:48:42.773 "trtype": "tcp", 00:48:42.773 "traddr": "10.0.0.2", 00:48:42.773 "adrfam": "ipv4", 00:48:42.773 "trsvcid": "4420", 00:48:42.773 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:48:42.773 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:48:42.773 "hdgst": false, 00:48:42.773 "ddgst": false 00:48:42.773 }, 00:48:42.773 "method": "bdev_nvme_attach_controller" 00:48:42.773 },{ 00:48:42.773 "params": { 00:48:42.773 "name": "Nvme7", 00:48:42.773 "trtype": "tcp", 00:48:42.773 "traddr": "10.0.0.2", 00:48:42.773 "adrfam": "ipv4", 00:48:42.773 "trsvcid": "4420", 00:48:42.773 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:48:42.773 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:48:42.773 "hdgst": false, 00:48:42.773 "ddgst": false 00:48:42.773 }, 00:48:42.773 "method": "bdev_nvme_attach_controller" 00:48:42.773 },{ 00:48:42.773 "params": { 00:48:42.773 "name": "Nvme8", 00:48:42.773 "trtype": "tcp", 00:48:42.773 "traddr": "10.0.0.2", 00:48:42.773 "adrfam": "ipv4", 00:48:42.773 "trsvcid": "4420", 00:48:42.773 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:48:42.773 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:48:42.773 "hdgst": false, 00:48:42.773 "ddgst": false 00:48:42.773 }, 00:48:42.773 "method": "bdev_nvme_attach_controller" 00:48:42.773 },{ 00:48:42.773 "params": { 00:48:42.773 "name": "Nvme9", 00:48:42.773 "trtype": "tcp", 00:48:42.773 "traddr": "10.0.0.2", 00:48:42.773 "adrfam": "ipv4", 00:48:42.773 "trsvcid": "4420", 00:48:42.773 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:48:42.773 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:48:42.773 "hdgst": false, 00:48:42.773 "ddgst": false 00:48:42.773 }, 00:48:42.773 "method": "bdev_nvme_attach_controller" 00:48:42.773 },{ 00:48:42.773 "params": { 00:48:42.773 "name": "Nvme10", 00:48:42.773 "trtype": "tcp", 00:48:42.773 "traddr": "10.0.0.2", 00:48:42.773 "adrfam": "ipv4", 00:48:42.773 "trsvcid": "4420", 00:48:42.773 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:48:42.773 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:48:42.773 "hdgst": false, 00:48:42.773 "ddgst": false 00:48:42.773 }, 00:48:42.773 "method": "bdev_nvme_attach_controller" 00:48:42.773 }' 00:48:42.773 [2024-12-09 05:43:36.905177] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:48:42.773 [2024-12-09 05:43:36.905262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid696841 ] 00:48:42.773 [2024-12-09 05:43:36.976689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:43.031 [2024-12-09 05:43:37.037218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:44.403 Running I/O for 10 seconds... 00:48:44.988 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:44.988 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:48:44.988 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:48:44.988 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:44.988 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:48:44.988 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:44.988 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:48:44.988 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:48:44.988 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:48:44.988 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:48:44.988 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:48:44.988 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:48:44.988 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:48:44.988 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:48:44.988 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:48:44.988 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:44.988 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:48:44.988 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:44.988 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:48:44.988 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:48:44.988 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:48:45.286 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:48:45.286 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:48:45.286 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:48:45.286 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:48:45.286 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:45.287 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:48:45.287 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:45.287 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:48:45.287 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:48:45.287 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:48:45.287 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:48:45.287 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:48:45.287 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 696841 00:48:45.287 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 696841 ']' 00:48:45.287 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 696841 00:48:45.287 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:48:45.287 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:45.287 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 696841 00:48:45.287 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:48:45.287 05:43:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:48:45.287 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 696841' 00:48:45.287 killing process with pid 696841 00:48:45.287 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 696841 00:48:45.287 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 696841 00:48:45.287 Received shutdown signal, test time was about 0.845498 seconds 00:48:45.287 00:48:45.287 Latency(us) 00:48:45.287 [2024-12-09T04:43:39.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:45.287 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:48:45.287 Verification LBA range: start 0x0 length 0x400 00:48:45.287 Nvme1n1 : 0.83 230.93 14.43 0.00 0.00 273420.14 29515.47 248551.35 00:48:45.287 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:48:45.287 Verification LBA range: start 0x0 length 0x400 00:48:45.287 Nvme2n1 : 0.81 237.68 14.85 0.00 0.00 257602.75 18641.35 250104.79 00:48:45.287 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:48:45.287 Verification LBA range: start 0x0 length 0x400 00:48:45.287 Nvme3n1 : 0.80 240.01 15.00 0.00 0.00 250526.78 17864.63 257872.02 00:48:45.287 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:48:45.287 Verification LBA range: start 0x0 length 0x400 00:48:45.287 Nvme4n1 : 0.81 238.16 14.89 0.00 0.00 246417.38 31068.92 240784.12 00:48:45.287 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:48:45.287 Verification LBA range: start 0x0 length 0x400 00:48:45.287 Nvme5n1 : 0.82 233.94 14.62 0.00 0.00 243673.57 18932.62 251658.24 00:48:45.287 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:48:45.287 Verification LBA range: start 0x0 length 0x400 00:48:45.287 Nvme6n1 : 0.82 234.39 14.65 0.00 0.00 238972.27 30486.38 231463.44 00:48:45.287 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:48:45.287 Verification LBA range: start 0x0 length 0x400 00:48:45.287 Nvme7n1 : 0.83 232.37 14.52 0.00 0.00 235537.70 16893.72 250104.79 00:48:45.287 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:48:45.287 Verification LBA range: start 0x0 length 0x400 00:48:45.287 Nvme8n1 : 0.84 229.38 14.34 0.00 0.00 233275.80 18155.90 260978.92 00:48:45.287 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:48:45.287 Verification LBA range: start 0x0 length 0x400 00:48:45.287 Nvme9n1 : 0.84 227.31 14.21 0.00 0.00 230067.01 21456.97 288940.94 00:48:45.287 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:48:45.287 Verification LBA range: start 0x0 length 0x400 00:48:45.287 Nvme10n1 : 0.84 228.27 14.27 0.00 0.00 222993.07 20291.89 262532.36 00:48:45.287 [2024-12-09T04:43:39.512Z] =================================================================================================================== 00:48:45.287 [2024-12-09T04:43:39.512Z] Total : 2332.43 145.78 0.00 0.00 243248.65 16893.72 288940.94 00:48:45.587 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:48:46.521 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@115 -- # kill -0 696731 00:48:46.521 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:48:46.521 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:48:46.521 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:48:46.521 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:48:46.521 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:48:46.521 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:48:46.521 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:48:46.521 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:48:46.521 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:48:46.521 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:48:46.521 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:48:46.521 rmmod nvme_tcp 00:48:46.521 rmmod nvme_fabrics 00:48:46.521 rmmod nvme_keyring 00:48:46.779 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:48:46.779 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:48:46.779 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:48:46.779 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 696731 ']' 00:48:46.779 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 696731 00:48:46.779 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 696731 ']' 00:48:46.779 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 696731 00:48:46.779 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:48:46.779 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:46.779 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 696731 00:48:46.779 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:48:46.779 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:48:46.779 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 696731' 00:48:46.779 killing process with pid 696731 00:48:46.779 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@973 -- # kill 696731 00:48:46.779 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 696731 00:48:47.347 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:48:47.347 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:48:47.347 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:48:47.347 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:48:47.347 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:48:47.347 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:48:47.347 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:48:47.347 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:48:47.347 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:48:47.347 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:47.347 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:47.347 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:48:49.256 00:48:49.256 real 0m7.601s 00:48:49.256 user 0m22.996s 00:48:49.256 sys 0m1.413s 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:48:49.256 ************************************ 00:48:49.256 END TEST nvmf_shutdown_tc2 00:48:49.256 ************************************ 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:48:49.256 ************************************ 00:48:49.256 START TEST nvmf_shutdown_tc3 00:48:49.256 ************************************ 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:48:49.256 05:43:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:48:49.256 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:48:49.257 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:48:49.257 05:43:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:48:49.257 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:48:49.257 Found net devices under 0000:0a:00.0: cvl_0_0 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:48:49.257 05:43:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:48:49.257 Found net devices under 0000:0a:00.1: cvl_0_1 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:48:49.257 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:48:49.516 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:48:49.516 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:48:49.516 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:48:49.516 05:43:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:48:49.516 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:48:49.516 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:48:49.516 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:48:49.516 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:48:49.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:48:49.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:48:49.516 00:48:49.516 --- 10.0.0.2 ping statistics --- 00:48:49.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:49.516 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:48:49.516 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:48:49.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:48:49.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:48:49.516 00:48:49.516 --- 10.0.0.1 ping statistics --- 00:48:49.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:49.516 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:48:49.516 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:48:49.516 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:48:49.516 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:48:49.516 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:48:49.516 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:48:49.516 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:48:49.516 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:48:49.516 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:48:49.516 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:48:49.516 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:48:49.516 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:48:49.516 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:49.516 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:48:49.516 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=697757 00:48:49.516 05:43:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 697757 00:48:49.516 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 697757 ']' 00:48:49.516 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:49.516 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:48:49.516 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:49.517 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:49.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:49.517 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:49.517 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:48:49.517 [2024-12-09 05:43:43.639590] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:48:49.517 [2024-12-09 05:43:43.639685] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:49.517 [2024-12-09 05:43:43.721969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:48:49.774 [2024-12-09 05:43:43.782084] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:48:49.774 [2024-12-09 05:43:43.782143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:48:49.774 [2024-12-09 05:43:43.782157] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:48:49.774 [2024-12-09 05:43:43.782169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:48:49.774 [2024-12-09 05:43:43.782179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
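The nvmftestinit trace above builds the TCP test topology by splitting the two e810 ports across network namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables rule opens TCP port 4420, and reachability is checked with ping in both directions before nvmf_tgt is launched inside the namespace. A minimal standalone sketch of that setup, assuming this run's interface names and addresses (a reconstruction of the traced steps, not the autotest helper itself):

    # target port goes into a private namespace; initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic reach the target's listener port
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # the target itself is then started inside the namespace, as the trace shows:
    # ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1E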
00:48:49.774 [2024-12-09 05:43:43.783705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:48:49.774 [2024-12-09 05:43:43.783767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:48:49.774 [2024-12-09 05:43:43.783833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:48:49.774 [2024-12-09 05:43:43.783837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:48:49.774 [2024-12-09 05:43:43.927112] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:49.774 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:48:49.774 Malloc1 00:48:50.032 [2024-12-09 05:43:44.012313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:50.032 Malloc2 00:48:50.032 Malloc3 00:48:50.032 Malloc4 00:48:50.032 Malloc5 00:48:50.032 Malloc6 00:48:50.289 Malloc7 00:48:50.289 Malloc8 00:48:50.289 Malloc9 00:48:50.289 Malloc10 00:48:50.289 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:50.289 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:48:50.289 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:50.289 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:48:50.289 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=697936 00:48:50.289 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 697936 /var/tmp/bdevperf.sock 00:48:50.289 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 697936 ']' 00:48:50.289 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:48:50.289 05:43:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:48:50.289 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:48:50.289 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:50.290 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:48:50.290 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:48:50.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:48:50.290 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:48:50.290 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:50.290 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:48:50.290 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:50.290 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:50.290 { 00:48:50.290 "params": { 00:48:50.290 "name": "Nvme$subsystem", 00:48:50.290 "trtype": "$TEST_TRANSPORT", 00:48:50.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:50.290 "adrfam": "ipv4", 00:48:50.290 "trsvcid": "$NVMF_PORT", 00:48:50.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:50.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:50.290 "hdgst": ${hdgst:-false}, 00:48:50.290 "ddgst": ${ddgst:-false} 00:48:50.290 }, 00:48:50.290 "method": "bdev_nvme_attach_controller" 00:48:50.290 } 00:48:50.290 EOF 00:48:50.290 )") 00:48:50.290 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:48:50.290 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:50.290 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:50.290 { 00:48:50.290 "params": { 00:48:50.290 "name": "Nvme$subsystem", 00:48:50.290 "trtype": "$TEST_TRANSPORT", 00:48:50.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:50.290 "adrfam": "ipv4", 00:48:50.290 "trsvcid": "$NVMF_PORT", 00:48:50.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:50.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:50.290 "hdgst": ${hdgst:-false}, 00:48:50.290 "ddgst": ${ddgst:-false} 00:48:50.290 }, 00:48:50.290 "method": "bdev_nvme_attach_controller" 00:48:50.290 } 00:48:50.290 EOF 00:48:50.290 )") 00:48:50.290 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:48:50.290 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:50.290 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:50.290 { 00:48:50.290 "params": { 00:48:50.290 
"name": "Nvme$subsystem", 00:48:50.290 "trtype": "$TEST_TRANSPORT", 00:48:50.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:50.290 "adrfam": "ipv4", 00:48:50.290 "trsvcid": "$NVMF_PORT", 00:48:50.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:50.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:50.290 "hdgst": ${hdgst:-false}, 00:48:50.290 "ddgst": ${ddgst:-false} 00:48:50.290 }, 00:48:50.290 "method": "bdev_nvme_attach_controller" 00:48:50.290 } 00:48:50.290 EOF 00:48:50.290 )") 00:48:50.290 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:48:50.290 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:50.290 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:50.290 { 00:48:50.290 "params": { 00:48:50.290 "name": "Nvme$subsystem", 00:48:50.290 "trtype": "$TEST_TRANSPORT", 00:48:50.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:50.290 "adrfam": "ipv4", 00:48:50.290 "trsvcid": "$NVMF_PORT", 00:48:50.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:50.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:50.290 "hdgst": ${hdgst:-false}, 00:48:50.290 "ddgst": ${ddgst:-false} 00:48:50.290 }, 00:48:50.290 "method": "bdev_nvme_attach_controller" 00:48:50.290 } 00:48:50.290 EOF 00:48:50.290 )") 00:48:50.290 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:48:50.290 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:50.290 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:50.290 { 00:48:50.290 "params": { 00:48:50.290 "name": "Nvme$subsystem", 00:48:50.290 "trtype": "$TEST_TRANSPORT", 00:48:50.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:50.290 "adrfam": "ipv4", 00:48:50.290 "trsvcid": "$NVMF_PORT", 00:48:50.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:50.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:50.290 "hdgst": ${hdgst:-false}, 00:48:50.290 "ddgst": ${ddgst:-false} 00:48:50.290 }, 00:48:50.290 "method": "bdev_nvme_attach_controller" 00:48:50.290 } 00:48:50.290 EOF 00:48:50.290 )") 00:48:50.290 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:48:50.290 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:50.290 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:50.290 { 00:48:50.290 "params": { 00:48:50.290 "name": "Nvme$subsystem", 00:48:50.290 "trtype": "$TEST_TRANSPORT", 00:48:50.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:50.290 "adrfam": "ipv4", 00:48:50.290 "trsvcid": "$NVMF_PORT", 00:48:50.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:50.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:50.290 "hdgst": ${hdgst:-false}, 00:48:50.290 "ddgst": ${ddgst:-false} 00:48:50.290 }, 00:48:50.290 "method": "bdev_nvme_attach_controller" 00:48:50.290 } 00:48:50.290 EOF 00:48:50.290 )") 00:48:50.290 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:48:50.290 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:48:50.290 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:50.290 { 00:48:50.290 "params": { 00:48:50.290 "name": "Nvme$subsystem", 00:48:50.290 "trtype": "$TEST_TRANSPORT", 00:48:50.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:50.290 "adrfam": "ipv4", 00:48:50.290 "trsvcid": "$NVMF_PORT", 00:48:50.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:50.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:50.291 "hdgst": ${hdgst:-false}, 00:48:50.291 "ddgst": ${ddgst:-false} 00:48:50.291 }, 00:48:50.291 "method": "bdev_nvme_attach_controller" 00:48:50.291 } 00:48:50.291 EOF 00:48:50.291 )") 00:48:50.291 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:48:50.291 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:50.291 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:50.291 { 00:48:50.291 "params": { 00:48:50.291 "name": "Nvme$subsystem", 00:48:50.291 "trtype": "$TEST_TRANSPORT", 00:48:50.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:50.291 "adrfam": "ipv4", 00:48:50.291 "trsvcid": "$NVMF_PORT", 00:48:50.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:50.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:50.291 "hdgst": ${hdgst:-false}, 00:48:50.291 "ddgst": ${ddgst:-false} 00:48:50.291 }, 00:48:50.291 "method": "bdev_nvme_attach_controller" 00:48:50.291 } 00:48:50.291 EOF 00:48:50.291 )") 00:48:50.291 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:48:50.291 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:50.291 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:50.291 { 00:48:50.291 "params": { 00:48:50.291 "name": "Nvme$subsystem", 00:48:50.291 "trtype": "$TEST_TRANSPORT", 00:48:50.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:50.291 "adrfam": "ipv4", 00:48:50.291 "trsvcid": "$NVMF_PORT", 00:48:50.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:50.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:50.291 "hdgst": ${hdgst:-false}, 00:48:50.291 "ddgst": ${ddgst:-false} 00:48:50.291 }, 00:48:50.291 "method": "bdev_nvme_attach_controller" 00:48:50.291 } 00:48:50.291 EOF 00:48:50.291 )") 00:48:50.291 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:48:50.291 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:48:50.291 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:48:50.291 { 00:48:50.291 "params": { 00:48:50.291 "name": "Nvme$subsystem", 00:48:50.291 "trtype": "$TEST_TRANSPORT", 00:48:50.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:50.291 "adrfam": "ipv4", 00:48:50.291 "trsvcid": "$NVMF_PORT", 00:48:50.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:50.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:50.291 "hdgst": ${hdgst:-false}, 00:48:50.291 "ddgst": ${ddgst:-false} 00:48:50.291 }, 00:48:50.291 "method": "bdev_nvme_attach_controller" 00:48:50.291 } 00:48:50.291 EOF 00:48:50.291 )") 00:48:50.291 05:43:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:48:50.291 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:48:50.291 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:48:50.291 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:48:50.291 "params": { 00:48:50.291 "name": "Nvme1", 00:48:50.291 "trtype": "tcp", 00:48:50.291 "traddr": "10.0.0.2", 00:48:50.291 "adrfam": "ipv4", 00:48:50.291 "trsvcid": "4420", 00:48:50.291 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:48:50.291 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:48:50.291 "hdgst": false, 00:48:50.291 "ddgst": false 00:48:50.291 }, 00:48:50.291 "method": "bdev_nvme_attach_controller" 00:48:50.291 },{ 00:48:50.291 "params": { 00:48:50.291 "name": "Nvme2", 00:48:50.291 "trtype": "tcp", 00:48:50.291 "traddr": "10.0.0.2", 00:48:50.291 "adrfam": "ipv4", 00:48:50.291 "trsvcid": "4420", 00:48:50.291 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:48:50.291 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:48:50.291 "hdgst": false, 00:48:50.291 "ddgst": false 00:48:50.291 }, 00:48:50.291 "method": "bdev_nvme_attach_controller" 00:48:50.291 },{ 00:48:50.291 "params": { 00:48:50.291 "name": "Nvme3", 00:48:50.291 "trtype": "tcp", 00:48:50.291 "traddr": "10.0.0.2", 00:48:50.291 "adrfam": "ipv4", 00:48:50.291 "trsvcid": "4420", 00:48:50.291 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:48:50.291 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:48:50.291 "hdgst": false, 00:48:50.291 "ddgst": false 00:48:50.291 }, 00:48:50.291 "method": "bdev_nvme_attach_controller" 00:48:50.291 },{ 00:48:50.291 "params": { 00:48:50.291 "name": "Nvme4", 00:48:50.291 "trtype": "tcp", 00:48:50.291 "traddr": "10.0.0.2", 00:48:50.291 "adrfam": "ipv4", 00:48:50.291 "trsvcid": "4420", 00:48:50.291 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:48:50.291 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:48:50.291 "hdgst": false, 00:48:50.291 "ddgst": false 00:48:50.291 }, 00:48:50.291 "method": "bdev_nvme_attach_controller" 00:48:50.291 },{ 00:48:50.291 "params": { 00:48:50.291 "name": "Nvme5", 00:48:50.291 "trtype": "tcp", 00:48:50.291 "traddr": "10.0.0.2", 00:48:50.291 "adrfam": "ipv4", 00:48:50.291 "trsvcid": "4420", 00:48:50.291 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:48:50.291 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:48:50.291 "hdgst": false, 00:48:50.291 "ddgst": false 00:48:50.291 }, 00:48:50.291 "method": "bdev_nvme_attach_controller" 00:48:50.291 },{ 00:48:50.291 "params": { 00:48:50.291 "name": "Nvme6", 00:48:50.291 "trtype": "tcp", 00:48:50.291 "traddr": "10.0.0.2", 00:48:50.291 "adrfam": "ipv4", 00:48:50.291 "trsvcid": "4420", 00:48:50.291 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:48:50.291 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:48:50.291 "hdgst": false, 00:48:50.291 "ddgst": false 00:48:50.291 }, 00:48:50.291 "method": "bdev_nvme_attach_controller" 00:48:50.291 },{ 00:48:50.291 "params": { 00:48:50.291 "name": "Nvme7", 00:48:50.291 "trtype": "tcp", 00:48:50.291 "traddr": "10.0.0.2", 00:48:50.291 "adrfam": "ipv4", 00:48:50.291 "trsvcid": "4420", 00:48:50.291 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:48:50.292 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:48:50.292 "hdgst": false, 00:48:50.292 "ddgst": false 00:48:50.292 }, 00:48:50.292 "method": "bdev_nvme_attach_controller" 00:48:50.292 },{ 00:48:50.292 "params": { 00:48:50.292 "name": "Nvme8", 00:48:50.292 "trtype": "tcp", 
00:48:50.292 "traddr": "10.0.0.2", 00:48:50.292 "adrfam": "ipv4", 00:48:50.292 "trsvcid": "4420", 00:48:50.292 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:48:50.292 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:48:50.292 "hdgst": false, 00:48:50.292 "ddgst": false 00:48:50.292 }, 00:48:50.292 "method": "bdev_nvme_attach_controller" 00:48:50.292 },{ 00:48:50.292 "params": { 00:48:50.292 "name": "Nvme9", 00:48:50.292 "trtype": "tcp", 00:48:50.292 "traddr": "10.0.0.2", 00:48:50.292 "adrfam": "ipv4", 00:48:50.292 "trsvcid": "4420", 00:48:50.292 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:48:50.292 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:48:50.292 "hdgst": false, 00:48:50.292 "ddgst": false 00:48:50.292 }, 00:48:50.292 "method": "bdev_nvme_attach_controller" 00:48:50.292 },{ 00:48:50.292 "params": { 00:48:50.292 "name": "Nvme10", 00:48:50.292 "trtype": "tcp", 00:48:50.292 "traddr": "10.0.0.2", 00:48:50.292 "adrfam": "ipv4", 00:48:50.292 "trsvcid": "4420", 00:48:50.292 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:48:50.292 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:48:50.292 "hdgst": false, 00:48:50.292 "ddgst": false 00:48:50.292 }, 00:48:50.292 "method": "bdev_nvme_attach_controller" 00:48:50.292 }' 00:48:50.549 [2024-12-09 05:43:44.513643] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:48:50.549 [2024-12-09 05:43:44.513751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid697936 ] 00:48:50.549 [2024-12-09 05:43:44.586133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:50.549 [2024-12-09 05:43:44.645437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:52.441 Running I/O for 10 seconds... 
00:48:52.441 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:52.441 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:48:52.441 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:48:52.441 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:52.441 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:48:52.441 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:52.441 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:48:52.441 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:48:52.441 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:48:52.441 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:48:52.441 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:48:52.441 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:48:52.441 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:48:52.441 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:48:52.441 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:48:52.441 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:48:52.441 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:52.441 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:48:52.441 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:52.441 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:48:52.441 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:48:52.441 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:48:52.698 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:48:52.698 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:48:52.698 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:48:52.698 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:48:52.698 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:52.698 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:48:52.698 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:52.698 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=72 00:48:52.698 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 72 -ge 100 ']' 00:48:52.698 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:48:52.955 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:48:52.955 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:48:52.955 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:48:52.955 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:52.955 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:48:52.955 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:48:52.955 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:52.955 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=135 00:48:52.955 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 135 -ge 100 ']' 00:48:52.955 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:48:52.955 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:48:52.955 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:48:52.955 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 697757 00:48:52.955 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 697757 ']' 00:48:52.955 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 697757 00:48:52.955 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:48:52.955 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:52.955 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 697757 00:48:53.227 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:48:53.227 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:48:53.227 05:43:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 697757' 00:48:53.227 killing process with pid 697757 00:48:53.227 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 697757 00:48:53.227 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 697757 00:48:53.227 [2024-12-09 05:43:47.197672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9ace0 is same with the state(6) to be set 00:48:53.227 [2024-12-09 05:43:47.197750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9ace0 is same with the state(6) to be set 00:48:53.227 [2024-12-09 05:43:47.197768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9ace0 is same with the state(6) to be set 00:48:53.227 [2024-12-09 05:43:47.197782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9ace0 is same with the state(6) to be set 00:48:53.227 [2024-12-09 05:43:47.197795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9ace0 is same with the state(6) to be set 00:48:53.227 [2024-12-09 05:43:47.197807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9ace0 is same with the state(6) to be set 00:48:53.227 [2024-12-09 05:43:47.197820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9ace0 is same with the state(6) to be set 00:48:53.227 [2024-12-09 05:43:47.197832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9ace0 is same with the state(6) to be set 00:48:53.227 [2024-12-09 05:43:47.197844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9ace0 is same with the state(6) to be set 00:48:53.227 [2024-12-09 05:43:47.197856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9ace0 is same with the state(6) to be set 00:48:53.227 [2024-12-09 05:43:47.197870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9ace0 is same with the state(6) to be set 00:48:53.227 [2024-12-09 05:43:47.197882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9ace0 is same with the state(6) to be set 00:48:53.227 [2024-12-09 05:43:47.197894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9ace0 is same with the state(6) to be set 00:48:53.227 [2024-12-09 05:43:47.197907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9ace0 is same with the state(6) to be set 00:48:53.227 [2024-12-09 05:43:47.197919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9ace0 is same with the state(6) to be set 00:48:53.227 [2024-12-09 05:43:47.197932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9ace0 is same with the state(6) to be set 00:48:53.227 [2024-12-09 05:43:47.197945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9ace0 is same with the state(6) to be set 00:48:53.227 [2024-12-09 05:43:47.197957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9ace0 is same with the state(6) to be set 00:48:53.227 [2024-12-09 05:43:47.197969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9ace0 is same with the state(6) to be set 00:48:53.227 
[2024-12-09 05:43:47.197981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9ace0 is same with the state(6) to be set
[... same tcp.c:1773 error repeated for tqpair=0xd9ace0 with successive timestamps through 05:43:47.198567 ...]
00:48:53.228 [2024-12-09 05:43:47.200611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe42c0 is same with the state(6) to be set
[... same error repeated for tqpair=0xfe42c0 through 05:43:47.201488 ...]
00:48:53.229 [2024-12-09 05:43:47.203167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9b1b0 is same with the state(6) to be set
00:48:53.229 [2024-12-09 05:43:47.204188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9b680 is same with the state(6) to be set
[... same error repeated for tqpair=0xd9b680 through 05:43:47.205051 ...]
00:48:53.229 [2024-12-09 05:43:47.206486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9bb70 is same with the state(6) to be set
[... same error repeated for tqpair=0xd9bb70 through 05:43:47.207380 ...]
00:48:53.230 [2024-12-09 05:43:47.208757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c510 is same with the state(6) to be set
[... same error repeated for tqpair=0xd9c510 through 05:43:47.209625 ...]
00:48:53.231 [2024-12-09 05:43:47.210930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100c690 is same with the state(6) to be set
[... same error repeated for tqpair=0x100c690 through 05:43:47.211820 ...]
00:48:53.232 [2024-12-09 05:43:47.212819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cb60 is same with the state(6) to be set
[... same error repeated for tqpair=0x100cb60 through 05:43:47.212883 ...]
00:48:53.232 [2024-12-09 05:43:47.213228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe3dd0 is same with the state(6) to be set
[... same error repeated for tqpair=0xfe3dd0 through 05:43:47.214066 ...]
00:48:53.233 [2024-12-09 05:43:47.219666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:48:53.233 [2024-12-09 05:43:47.219714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:48:53.233 [2024-12-09 05:43:47.219733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:48:53.233 [2024-12-09 05:43:47.219747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:48:53.233 [2024-12-09 05:43:47.219761] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.219775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.219790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.219803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.219817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19556f0 is same with the state(6) to be set 00:48:53.233 [2024-12-09 05:43:47.219879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.219906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.219922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.219936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.219949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.219962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.219977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.219989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.220003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbf3e0 is same with the state(6) to be set 00:48:53.233 [2024-12-09 05:43:47.220055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.220076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.220102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.220115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.220129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.220142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.220156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.220168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.220181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dce270 is same with the state(6) to be set 00:48:53.233 [2024-12-09 05:43:47.220230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.220250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.220266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.220289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.220304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.220325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.220339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.220352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.220365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbf200 is same with the state(6) to be set 00:48:53.233 [2024-12-09 05:43:47.220425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.220446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.220461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.220474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.220489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.220502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.220516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.220529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.220541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8cfc0 is same with the state(6) to be set 00:48:53.233 [2024-12-09 05:43:47.220597] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.220618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.220633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.220647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.220671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.220685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.220699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.220713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.220726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1961270 is same with the state(6) to be set 00:48:53.233 [2024-12-09 05:43:47.220783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.220803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.220822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.220836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.220850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.220863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.220877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.220894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.220907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1961700 is same with the state(6) to be set 00:48:53.233 [2024-12-09 05:43:47.220953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.220974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.220989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.221002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.221016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.233 [2024-12-09 05:43:47.221029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.233 [2024-12-09 05:43:47.221044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.234 [2024-12-09 05:43:47.221057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.221081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dce450 is same with the state(6) to be set 00:48:53.234 [2024-12-09 05:43:47.221126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.234 [2024-12-09 05:43:47.221147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.221162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.234 [2024-12-09 05:43:47.221175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.221189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.234 [2024-12-09 05:43:47.221201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.221215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.234 [2024-12-09 05:43:47.221228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.221240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8c9e0 is same with the state(6) to be set 00:48:53.234 [2024-12-09 05:43:47.221294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.234 [2024-12-09 05:43:47.221327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.221342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.234 [2024-12-09 05:43:47.221355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.221369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.234 [2024-12-09 05:43:47.221383] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.221401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:48:53.234 [2024-12-09 05:43:47.221415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.221428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c9110 is same with the state(6) to be set 00:48:53.234 [2024-12-09 05:43:47.221614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.221650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.221678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.221694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.221710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.221725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.221741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.221756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.221772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.221786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.221801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.221815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.221831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.221845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.221861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.221876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.221891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.221905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.221921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.221934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.221950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.221964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.221984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.221999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.222014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.222028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.222054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.222068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.222084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.222098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.222113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.222127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.222143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.222157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.222172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.222186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.222201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.222215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.222231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.222244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.222260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.222292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.222311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.222329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.222344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.222358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.222374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.222393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.222409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.222423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.222438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.222452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.222468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.222485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.222501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.222516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.222532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 
lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.234 [2024-12-09 05:43:47.222545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.234 [2024-12-09 05:43:47.222568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.222583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.222600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.222614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.222633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.222648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.222664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.222678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.222693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.222707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.222723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.222738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.222754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.222768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.222788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.222804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.222819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.222834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.222849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.222863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.222879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.222900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.222917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.222931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.222946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.222960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.222977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.222990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.223006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.223020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.223036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.223050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.223065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.223079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.223095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.223110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.223126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.223140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.223156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.223174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.223190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.223204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.223219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.223233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.223248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.223262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.223286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.223303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.223330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.223344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.223360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.223373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.223389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.223404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.223420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.223434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.223449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.223464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.223479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.223493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.223509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.223522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.223538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.223552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.223577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.223592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.223608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.223632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.223648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.223662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.223968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.223992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.224014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.224030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.224046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.224060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.224076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.235 [2024-12-09 05:43:47.224091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.235 [2024-12-09 05:43:47.224106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:48:53.235 [2024-12-09 05:43:47.224120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.224151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.224181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.224210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.224241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.224285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.224328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.224359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.224388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.224418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 
[2024-12-09 05:43:47.224448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.224478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.224508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.224537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.224577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.224606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.224636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.224666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.224701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.224732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 
05:43:47.224761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.224791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.224821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.224851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.224881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.224911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.224941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.224973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.224990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.225004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.225019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.225033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.225054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 
05:43:47.225072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.225088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.225102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.225119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.225133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.225149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.225163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.225179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.225193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.236 [2024-12-09 05:43:47.225208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.236 [2024-12-09 05:43:47.225222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.237 [2024-12-09 05:43:47.225238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.237 [2024-12-09 05:43:47.225252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.237 [2024-12-09 05:43:47.225279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.237 [2024-12-09 05:43:47.225296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.237 [2024-12-09 05:43:47.225312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.237 [2024-12-09 05:43:47.225326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.237 [2024-12-09 05:43:47.225342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.237 [2024-12-09 05:43:47.225355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.237 [2024-12-09 05:43:47.225372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.237 [2024-12-09 
05:43:47.225385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.237 [2024-12-09 05:43:47.225401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.237 [2024-12-09 05:43:47.225420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.237 [2024-12-09 05:43:47.225436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.237 [2024-12-09 05:43:47.225450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.237 [2024-12-09 05:43:47.225470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.237 [2024-12-09 05:43:47.225484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.237 [2024-12-09 05:43:47.225500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.237 [2024-12-09 05:43:47.225514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.237 [2024-12-09 05:43:47.225529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.237 [2024-12-09 05:43:47.225543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.237 [2024-12-09 05:43:47.225569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.237 [2024-12-09 05:43:47.225583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.237 [2024-12-09 05:43:47.225597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.237 [2024-12-09 05:43:47.225611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.237 [2024-12-09 05:43:47.225636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.237 [2024-12-09 05:43:47.225649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.237 [2024-12-09 05:43:47.225665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.237 [2024-12-09 05:43:47.225678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.237 [2024-12-09 05:43:47.225694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.237 [2024-12-09 
05:43:47.225708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.237 [2024-12-09 05:43:47.225723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.237 [2024-12-09 05:43:47.225737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.237 [2024-12-09 05:43:47.225753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.237 [2024-12-09 05:43:47.225766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.237 [2024-12-09 05:43:47.225782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.237 [2024-12-09 05:43:47.225795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.237 [2024-12-09 05:43:47.225811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.237 [2024-12-09 05:43:47.225824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.237 [2024-12-09 05:43:47.225840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.237 [2024-12-09 05:43:47.225857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.237 [2024-12-09 05:43:47.225873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.237 [2024-12-09 05:43:47.225887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.237 [2024-12-09 05:43:47.225902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.237 [2024-12-09 05:43:47.225921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.237 [2024-12-09 05:43:47.225951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.237 [2024-12-09 05:43:47.225966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.237 [2024-12-09 05:43:47.225982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.237 [2024-12-09 05:43:47.225996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.237 [2024-12-09 05:43:47.226015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67e20 is same with the state(6) to be set 00:48:53.237 [2024-12-09 05:43:47.230703] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:48:53.237 [2024-12-09 05:43:47.230773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:48:53.237 [2024-12-09 05:43:47.230807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8c9e0 (9): Bad file descriptor 00:48:53.237 [2024-12-09 05:43:47.230832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19556f0 (9): Bad file descriptor 00:48:53.237 [2024-12-09 05:43:47.230857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbf3e0 (9): Bad file descriptor 00:48:53.237 [2024-12-09 05:43:47.230892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dce270 (9): Bad file descriptor 00:48:53.237 [2024-12-09 05:43:47.230924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbf200 (9): Bad file descriptor 00:48:53.237 [2024-12-09 05:43:47.230958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8cfc0 (9): Bad file descriptor 00:48:53.237 [2024-12-09 05:43:47.230989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1961270 (9): Bad file descriptor 00:48:53.237 [2024-12-09 05:43:47.231018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1961700 (9): Bad file descriptor 00:48:53.237 [2024-12-09 05:43:47.231046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dce450 (9): Bad file descriptor 00:48:53.237 [2024-12-09 05:43:47.231079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c9110 (9): Bad file descriptor 00:48:53.237 [2024-12-09 05:43:47.232674] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:48:53.237 [2024-12-09 05:43:47.232765] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:48:53.237 [2024-12-09 05:43:47.233134] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:48:53.237 [2024-12-09 05:43:47.233217] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:48:53.237 [2024-12-09 05:43:47.233249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:48:53.237 [2024-12-09 05:43:47.233418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:53.237 [2024-12-09 05:43:47.233464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19556f0 with addr=10.0.0.2, port=4420 00:48:53.237 [2024-12-09 05:43:47.233482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19556f0 is same with the state(6) to be set 00:48:53.237 [2024-12-09 05:43:47.233589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:53.237 [2024-12-09 05:43:47.233616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d8c9e0 with addr=10.0.0.2, port=4420 00:48:53.237 [2024-12-09 05:43:47.233634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8c9e0 is same with the state(6) to be set 00:48:53.237 [2024-12-09 05:43:47.233707] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:48:53.237 [2024-12-09 05:43:47.233802] 
nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:48:53.237 [2024-12-09 05:43:47.233872] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:48:53.237 [2024-12-09 05:43:47.234068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:53.237 [2024-12-09 05:43:47.234098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dce450 with addr=10.0.0.2, port=4420 00:48:53.237 [2024-12-09 05:43:47.234115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dce450 is same with the state(6) to be set 00:48:53.237 [2024-12-09 05:43:47.234135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19556f0 (9): Bad file descriptor 00:48:53.237 [2024-12-09 05:43:47.234156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8c9e0 (9): Bad file descriptor 00:48:53.237 [2024-12-09 05:43:47.234303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dce450 (9): Bad file descriptor 00:48:53.237 [2024-12-09 05:43:47.234344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:48:53.237 [2024-12-09 05:43:47.234359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:48:53.237 [2024-12-09 05:43:47.234375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:48:53.238 [2024-12-09 05:43:47.234391] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:48:53.238 [2024-12-09 05:43:47.234407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:48:53.238 [2024-12-09 05:43:47.234420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:48:53.238 [2024-12-09 05:43:47.234433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:48:53.238 [2024-12-09 05:43:47.234446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:48:53.238 [2024-12-09 05:43:47.234499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:48:53.238 [2024-12-09 05:43:47.234516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:48:53.238 [2024-12-09 05:43:47.234529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:48:53.238 [2024-12-09 05:43:47.234542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:48:53.238 [2024-12-09 05:43:47.240937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.240998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 
05:43:47.241341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241654] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.241981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.241998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.242012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.242028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.242042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.242058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.242072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.238 [2024-12-09 05:43:47.242088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.238 [2024-12-09 05:43:47.242102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242888] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.242977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.242990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.243005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b654b0 is same with the state(6) to be set 00:48:53.239 [2024-12-09 05:43:47.244320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.244344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.244364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.244379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.244396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.244410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.244426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.244440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.244456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.244470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.244486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.244500] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.244516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.244531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.244547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.244561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.244581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.244595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.239 [2024-12-09 05:43:47.244611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.239 [2024-12-09 05:43:47.244625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.244641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.244655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.244671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.244685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.244701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.244720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.244737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.244751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.244767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.244781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.244796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.244810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.244826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.244839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.244855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.244869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.244885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.244899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.244915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.244929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.244944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.244959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.244975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.244989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.245006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.245020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.245035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.245050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.245065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.245078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.245098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.245113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.245130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.245144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.245160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.245174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.245190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.245204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.245219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.245233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.245249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.245263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.245287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.245304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.245330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.245344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.245360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.245374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.245390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.245404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.245419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.245433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.245449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.245463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.245479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.245497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.245513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.245527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.245543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.245568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.245584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.245597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.245613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.245627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.245643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.245657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.245673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.245687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.245703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.245717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.240 [2024-12-09 05:43:47.245732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.240 [2024-12-09 05:43:47.245747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:48:53.240 [2024-12-09 05:43:47.245763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.245777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.245792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.245806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.245822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.245835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.245851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.245868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.245888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.245903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.245919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.245933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.245949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.245963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.245979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.245993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.246009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.246023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.246039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.246053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 
05:43:47.246070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.246085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.246100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.246114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.246131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.246146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.246163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.246177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.246193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.246215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.246231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.246245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.246260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.246287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.246305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.246331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.246345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b67570 is same with the state(6) to be set 00:48:53.241 [2024-12-09 05:43:47.247599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.247633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.247656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.247671] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.247689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.247704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.247720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.247734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.247751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.247764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.247781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.247795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.247811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.247825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.247840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.247854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.247880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.247894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.247909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.247924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.247940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.247959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.247975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.247989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.248010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.248025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.248041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.248055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.248071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.248085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.248102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.241 [2024-12-09 05:43:47.248116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.241 [2024-12-09 05:43:47.248131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.248145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.248160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.248174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.248191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.248204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.248220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.248234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.248250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.248263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.248294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.248310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.248334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.248349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.248368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.248383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.248398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.248412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.248428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.248444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.248462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.248476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.248493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.248508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.248525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.248539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.248555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.248573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.248589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.248603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.248618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.248632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.248648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.248662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.248678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.248691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.248708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.248721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.248737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.248756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.248773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.248787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.248803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.248818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.248834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.248848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.248864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.248878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.248895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.248909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.248925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.248939] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.248955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.248970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.248986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.249001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.249017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.249032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.249048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.249062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.249079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.249094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.249109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.249124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.249148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.249163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.249179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.249194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.249209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.249223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.249239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.249253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.249269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.249291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.249307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.249321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.249343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.249356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.249372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.242 [2024-12-09 05:43:47.249385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.242 [2024-12-09 05:43:47.249401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.249416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.249431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.249453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.249469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.249483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.249499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.249513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.249528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.249545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.249561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.249581] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.249596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.249610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.249625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.249645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.249659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d66c80 is same with the state(6) to be set 00:48:53.243 [2024-12-09 05:43:47.250906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.250930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.250951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.250967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.250983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.250997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.251012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.251026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.251042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.251056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.251071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.251085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.251100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.251115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.251131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.251144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.251160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.251174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.251197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.251212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.251228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.251242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.251258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.251280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.251298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.251313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.251338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.251352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.251367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.251381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.251397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.251411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.251426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.251440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.251456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.251469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.251485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.251498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.251514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.251528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.251544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.251558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.251577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.251594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.251611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.251625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.251641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.251655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.251671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.251685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.251702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.251716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.251731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.251746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.251761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.251776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.251791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.251805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.243 [2024-12-09 05:43:47.251821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.243 [2024-12-09 05:43:47.251835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.251850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.251865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.251880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.251894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.251909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.251925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.251942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.251956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.251976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.251991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.252021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.252051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.252081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.252112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.252142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.252173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.252204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.252234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.252265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.252303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.252336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.252369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:48:53.244 [2024-12-09 05:43:47.252400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.252430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.252460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.252489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.252520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.252549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.252580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.252609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.252644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.252675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 
05:43:47.252705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.252735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.252770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.252800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.252830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.252860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.252891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.252906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d690e0 is same with the state(6) to be set 00:48:53.244 [2024-12-09 05:43:47.254165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.254189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.254211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.254227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.244 [2024-12-09 05:43:47.254243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.244 [2024-12-09 05:43:47.254257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.254281] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.254297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.254314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.254339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.254355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.254369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.254385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.254400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.254421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.254436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.254456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.254471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.254487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.254501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.254517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.254531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.254546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.254560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.254585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.254599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.254614] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.254628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.254653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.254666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.254682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.254696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.254711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.254725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.254741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.254754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.254770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.254783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.254798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.254817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.254833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.254851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.254867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.254881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.254897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.254910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.254925] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.254940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.254960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.254974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.254990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.255004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.255019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.255033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.255048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.255062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.255077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.255092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.255108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.255121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.255137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.255150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.255166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.255179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.255199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.255213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.255229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.255243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.255258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.255282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.255301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.255315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.255341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.255360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.255376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.245 [2024-12-09 05:43:47.255390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.245 [2024-12-09 05:43:47.255406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.255419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.255435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.255449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.255465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.255480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.255495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.255510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.255525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.255539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.255555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.255569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.255588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.255605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.255622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.255637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.255653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.255667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.255682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.255696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.255712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.255726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.255741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.255755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.255771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.255785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.255801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.255814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.255830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.255845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.255862] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.255876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.255891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.255905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.255921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.255935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.255951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.255966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.255985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.256000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.256015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.256029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.256045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.256059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.256074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.256088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.256104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.256118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.256133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.256147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.256163] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.256177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.256191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e84090 is same with the state(6) to be set 00:48:53.246 [2024-12-09 05:43:47.257439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.257462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.257484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.257499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.257515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.257529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.257546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.257560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.257586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.257600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.257621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.257646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.257662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.257676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.257692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.257706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.257722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.257736] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.257752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.257766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.257782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.257795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.257811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.257825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.257841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.246 [2024-12-09 05:43:47.257854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.246 [2024-12-09 05:43:47.257870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.257884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.257899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.257913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.257929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.257943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.257958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.257972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.257988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.258947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.258961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:48:53.247 [2024-12-09 05:43:47.258976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.265968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.266040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.266057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.266073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.266088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.247 [2024-12-09 05:43:47.266105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.247 [2024-12-09 05:43:47.266118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.266135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.266149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.266164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.266177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.266194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.266221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.266238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.266252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.266268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.266290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.266307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.266324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 
05:43:47.266340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.266354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.266369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.266383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.266399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.266412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.266428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.266442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.266458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.266472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.266487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e86610 is same with the state(6) to be set 00:48:53.248 [2024-12-09 05:43:47.267857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.267881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.267908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.267924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.267939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.267953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.267970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.267990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.268007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.268021] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.268036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.268050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.268066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.268080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.268096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.268110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.268125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.268139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.268155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.268169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.268185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.268199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.268215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.268229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.268245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.268259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.268282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.268299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.268315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.268329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.268345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.268359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.268378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.268393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.268409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.268424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.268440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.268453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.268469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.268483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.268499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.268513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.268530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.268544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.268560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.268573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.268589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.268603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.268619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.268640] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.268656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.268670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.268685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.248 [2024-12-09 05:43:47.268699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.248 [2024-12-09 05:43:47.268715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.268728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.268744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.268761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.268778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.268792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.268808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.268822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.268837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.268851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.268867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.268881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.268897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.268911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.268926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.268940] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.268956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.268969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.268986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.268999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.269029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.269059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.269089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.269119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.269153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.269188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.269233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.269264] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.269308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.269339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.269369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.269398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.269428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.269458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.269488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.269518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.269563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.269595] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.269628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.269658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.269688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.269719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.269748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.269778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.269808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.269839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:48:53.249 [2024-12-09 05:43:47.269873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:48:53.249 [2024-12-09 05:43:47.269888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7bb00 is same with the state(6) to be set 00:48:53.249 [2024-12-09 05:43:47.271497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:48:53.250 [2024-12-09 05:43:47.271534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:48:53.250 [2024-12-09 05:43:47.271557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:48:53.250 [2024-12-09 05:43:47.271584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:48:53.250 [2024-12-09 05:43:47.271716] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:48:53.250 [2024-12-09 05:43:47.271744] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:48:53.250 [2024-12-09 05:43:47.271764] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:48:53.250 [2024-12-09 05:43:47.271880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:48:53.250 [2024-12-09 05:43:47.271908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:48:53.250 task offset: 24576 on job bdev=Nvme2n1 fails
00:48:53.250
00:48:53.250 Latency(us)
00:48:53.250 [2024-12-09T04:43:47.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:48:53.250 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:48:53.250 Job: Nvme1n1 ended in about 0.96 seconds with error
00:48:53.250 Verification LBA range: start 0x0 length 0x400
00:48:53.250 Nvme1n1 : 0.96 203.18 12.70 66.34 0.00 234922.57 12184.84 259425.47
00:48:53.250 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:48:53.250 Job: Nvme2n1 ended in about 0.95 seconds with error
00:48:53.250 Verification LBA range: start 0x0 length 0x400
00:48:53.250 Nvme2n1 : 0.95 202.48 12.65 67.49 0.00 229943.56 19223.89 239230.67
00:48:53.250 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:48:53.250 Job: Nvme3n1 ended in about 0.97 seconds with error
00:48:53.250 Verification LBA range: start 0x0 length 0x400
00:48:53.250 Nvme3n1 : 0.97 132.24 8.26 66.12 0.00 307105.94 24563.86 260978.92
00:48:53.250 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:48:53.250 Job: Nvme4n1 ended in about 0.97 seconds with error
00:48:53.250 Verification LBA range: start 0x0 length 0x400
00:48:53.250 Nvme4n1 : 0.97 197.68 12.36 65.89 0.00 226575.17 30292.20 248551.35
00:48:53.250 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:48:53.250 Job: Nvme5n1 ended in about 0.95 seconds with error
00:48:53.250 Verification LBA range: start 0x0 length 0x400
00:48:53.250 Nvme5n1 : 0.95 202.23 12.64 67.41 0.00 216546.80 9029.40 267192.70
00:48:53.250 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:48:53.250 Job: Nvme6n1 ended in about 0.97 seconds with error
00:48:53.250 Verification LBA range: start 0x0 length 0x400
00:48:53.250 Nvme6n1 : 0.97 131.35 8.21 65.68 0.00 291290.07 21845.33 299815.06
00:48:53.250 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:48:53.250 Job: Nvme7n1 ended in about 0.98 seconds with error
00:48:53.250 Verification LBA range: start 0x0 length 0x400
00:48:53.250 Nvme7n1 : 0.98 196.36 12.27 65.45 0.00 214762.76 30874.74 253211.69
00:48:53.250 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:48:53.250 Verification LBA range: start 0x0 length 0x400
00:48:53.250 Nvme8n1 : 0.95 201.88 12.62 0.00 0.00 271582.50 18641.35 264085.81
00:48:53.250 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:48:53.250 Job: Nvme9n1 ended in about 0.99 seconds with error
00:48:53.250 Verification LBA range: start 0x0 length 0x400
00:48:53.250 Nvme9n1 : 0.99 129.54 8.10 64.77 0.00 278035.47 20486.07 274959.93
00:48:53.250 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:48:53.250 Job: Nvme10n1 ended in about 0.99 seconds with error
00:48:53.250 Verification LBA range: start 0x0 length 0x400
00:48:53.250 Nvme10n1 : 0.99 133.14 8.32 64.55 0.00 267677.66 19709.35 254765.13
00:48:53.250 [2024-12-09T04:43:47.475Z] ===================================================================================================================
00:48:53.250 [2024-12-09T04:43:47.475Z] Total : 1730.09 108.13 593.72 0.00 249665.22 9029.40 299815.06
00:48:53.250 [2024-12-09 05:43:47.298762] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:48:53.250 [2024-12-09 05:43:47.298860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:48:53.250 [2024-12-09 05:43:47.299181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:48:53.250 [2024-12-09 05:43:47.299218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1961700 with addr=10.0.0.2, port=4420
00:48:53.250 [2024-12-09 05:43:47.299238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1961700 is same with the state(6) to be set
00:48:53.250 [2024-12-09 05:43:47.299345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:48:53.250 [2024-12-09 05:43:47.299372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1961270 with addr=10.0.0.2, port=4420
00:48:53.250 [2024-12-09 05:43:47.299389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1961270 is same with the state(6) to be set
00:48:53.250 [2024-12-09 05:43:47.299480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:48:53.250 [2024-12-09 05:43:47.299508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d8cfc0 with addr=10.0.0.2, port=4420
00:48:53.250 [2024-12-09 05:43:47.299524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8cfc0 is same with the state(6) to be set
00:48:53.250 [2024-12-09 05:43:47.299625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:48:53.250 [2024-12-09 05:43:47.299652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c9110 with addr=10.0.0.2, port=4420
00:48:53.250 [2024-12-09 05:43:47.299669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c9110 is same with the state(6) to be set
00:48:53.250 [2024-12-09 05:43:47.301681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:48:53.250 [2024-12-09 05:43:47.301712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:48:53.250 [2024-12-09 05:43:47.301873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed,
errno = 111 00:48:53.250 [2024-12-09 05:43:47.301902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dce270 with addr=10.0.0.2, port=4420 00:48:53.250 [2024-12-09 05:43:47.301920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dce270 is same with the state(6) to be set 00:48:53.250 [2024-12-09 05:43:47.302000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:53.250 [2024-12-09 05:43:47.302026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dbf200 with addr=10.0.0.2, port=4420 00:48:53.250 [2024-12-09 05:43:47.302043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbf200 is same with the state(6) to be set 00:48:53.250 [2024-12-09 05:43:47.302126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:53.250 [2024-12-09 05:43:47.302152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dbf3e0 with addr=10.0.0.2, port=4420 00:48:53.250 [2024-12-09 05:43:47.302168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbf3e0 is same with the state(6) to be set 00:48:53.250 [2024-12-09 05:43:47.302193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1961700 (9): Bad file descriptor 00:48:53.250 [2024-12-09 05:43:47.302216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1961270 (9): Bad file descriptor 00:48:53.250 [2024-12-09 05:43:47.302235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8cfc0 (9): Bad file descriptor 00:48:53.250 [2024-12-09 05:43:47.302253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c9110 (9): Bad file descriptor 00:48:53.250 [2024-12-09 05:43:47.302323] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:48:53.250 [2024-12-09 05:43:47.302361] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:48:53.250 [2024-12-09 05:43:47.302381] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:48:53.250 [2024-12-09 05:43:47.302402] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:48:53.250 [2024-12-09 05:43:47.302424] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
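The repeated connect() failures above all report errno = 111, which on Linux is ECONNREFUSED: the shutdown test has already torn the target down, so every reconnect attempt to 10.0.0.2:4420 is refused. A minimal way to confirm the errno mapping on the test host (a sketch; any box with python3 works):

  # decode errno 111 into its symbolic name and message
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # expected on Linux: ECONNREFUSED - Connection refused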
00:48:53.250 [2024-12-09 05:43:47.302507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:48:53.250 [2024-12-09 05:43:47.302653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:53.250 [2024-12-09 05:43:47.302682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d8c9e0 with addr=10.0.0.2, port=4420 00:48:53.250 [2024-12-09 05:43:47.302698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8c9e0 is same with the state(6) to be set 00:48:53.250 [2024-12-09 05:43:47.302774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:53.251 [2024-12-09 05:43:47.302801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19556f0 with addr=10.0.0.2, port=4420 00:48:53.251 [2024-12-09 05:43:47.302817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19556f0 is same with the state(6) to be set 00:48:53.251 [2024-12-09 05:43:47.302836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dce270 (9): Bad file descriptor 00:48:53.251 [2024-12-09 05:43:47.302857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbf200 (9): Bad file descriptor 00:48:53.251 [2024-12-09 05:43:47.302875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbf3e0 (9): Bad file descriptor 00:48:53.251 [2024-12-09 05:43:47.302892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:48:53.251 [2024-12-09 05:43:47.302905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:48:53.251 [2024-12-09 05:43:47.302921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:48:53.251 [2024-12-09 05:43:47.302938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:48:53.251 [2024-12-09 05:43:47.302955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:48:53.251 [2024-12-09 05:43:47.302968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:48:53.251 [2024-12-09 05:43:47.302980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:48:53.251 [2024-12-09 05:43:47.302993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:48:53.251 [2024-12-09 05:43:47.303006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:48:53.251 [2024-12-09 05:43:47.303019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:48:53.251 [2024-12-09 05:43:47.303031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:48:53.251 [2024-12-09 05:43:47.303043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:48:53.251 [2024-12-09 05:43:47.303056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:48:53.251 [2024-12-09 05:43:47.303069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:48:53.251 [2024-12-09 05:43:47.303088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:48:53.251 [2024-12-09 05:43:47.303101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:48:53.251 [2024-12-09 05:43:47.303307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:48:53.251 [2024-12-09 05:43:47.303335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dce450 with addr=10.0.0.2, port=4420 00:48:53.251 [2024-12-09 05:43:47.303351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dce450 is same with the state(6) to be set 00:48:53.251 [2024-12-09 05:43:47.303370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8c9e0 (9): Bad file descriptor 00:48:53.251 [2024-12-09 05:43:47.303390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19556f0 (9): Bad file descriptor 00:48:53.251 [2024-12-09 05:43:47.303406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:48:53.251 [2024-12-09 05:43:47.303419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:48:53.251 [2024-12-09 05:43:47.303432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:48:53.251 [2024-12-09 05:43:47.303445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:48:53.251 [2024-12-09 05:43:47.303460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:48:53.251 [2024-12-09 05:43:47.303472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:48:53.251 [2024-12-09 05:43:47.303485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:48:53.251 [2024-12-09 05:43:47.303497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:48:53.251 [2024-12-09 05:43:47.303511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:48:53.251 [2024-12-09 05:43:47.303523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:48:53.251 [2024-12-09 05:43:47.303536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:48:53.251 [2024-12-09 05:43:47.303548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
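By this point every cnode controller has gone through the same sequence: reinitialization failed, marked as in failed state, reset abandoned. If the initiator-side SPDK application were still serving RPCs, the surviving controller objects and their state could be listed with the standard bdev_nvme_get_controllers call (a sketch, assuming the default /var/tmp/spdk.sock RPC socket; in this tc3 run the target itself is intentionally gone):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # tree used throughout this run
  "$SPDK_DIR/scripts/rpc.py" bdev_nvme_get_controllers          # list nvme bdev controllers and their state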
00:48:53.251 [2024-12-09 05:43:47.303601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dce450 (9): Bad file descriptor 00:48:53.251 [2024-12-09 05:43:47.303624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:48:53.251 [2024-12-09 05:43:47.303638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:48:53.251 [2024-12-09 05:43:47.303651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:48:53.251 [2024-12-09 05:43:47.303664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:48:53.251 [2024-12-09 05:43:47.303677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:48:53.251 [2024-12-09 05:43:47.303690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:48:53.251 [2024-12-09 05:43:47.303702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:48:53.251 [2024-12-09 05:43:47.303715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:48:53.251 [2024-12-09 05:43:47.303760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:48:53.251 [2024-12-09 05:43:47.303778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:48:53.251 [2024-12-09 05:43:47.303792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:48:53.251 [2024-12-09 05:43:47.303805] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
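With every controller left in failed state, the script below returns to shutdown.sh: it sleeps one second and then runs NOT wait 697936, the autotest_common.sh helper that asserts a command fails. The es= juggling visible in the trace (255 clamped to 127, then normalized to 1, then (( !es == 0 ))) boils down to "succeed only if the wrapped command did not succeed". A minimal self-contained sketch of that pattern (simplified; the real helper also validates the argument with type -t, as seen in the trace):

  # assert that a command fails, mirroring the exit-status handling traced below
  NOT() {
      local es=0
      "$@" || es=$?                 # run the command, capture its exit status
      if (( es > 128 )); then       # >128 means killed by a signal
          es=127
      fi
      case "$es" in
          0) ;;                     # unexpected success keeps es=0
          *) es=1 ;;                # any failure is normalized to 1
      esac
      (( !es == 0 ))                # exit 0 only when the command failed
  }

Here wait 697936 collects the bdevperf job, which exits non-zero after the target was shut down underneath it, so the NOT assertion passes and tc3 is reported as passed.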
00:48:53.818 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 697936 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 697936 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 697936 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:48:54.753 rmmod nvme_tcp 00:48:54.753 
rmmod nvme_fabrics 00:48:54.753 rmmod nvme_keyring 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 697757 ']' 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 697757 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 697757 ']' 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 697757 00:48:54.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (697757) - No such process 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 697757 is not found' 00:48:54.753 Process with pid 697757 is not found 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:54.753 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:56.658 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:48:56.658 00:48:56.658 real 0m7.440s 00:48:56.658 user 0m18.305s 00:48:56.658 sys 0m1.441s 00:48:56.658 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:48:56.658 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:48:56.658 ************************************ 00:48:56.658 END TEST nvmf_shutdown_tc3 00:48:56.658 ************************************ 00:48:56.658 05:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:48:56.658 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:48:56.658 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:48:56.658 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:48:56.658 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:48:56.658 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:48:56.916 ************************************ 00:48:56.916 START TEST nvmf_shutdown_tc4 00:48:56.916 ************************************ 00:48:56.916 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:48:56.916 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:48:56.916 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:48:56.916 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:48:56.916 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:48:56.916 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:48:56.916 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:48:56.916 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:48:56.916 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:56.916 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:48:56.916 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:56.916 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:48:56.916 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:48:56.916 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:48:56.917 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:48:56.917 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:48:56.917 05:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:48:56.917 Found net devices under 0000:0a:00.0: cvl_0_0 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:48:56.917 Found net devices under 0000:0a:00.1: cvl_0_1 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:48:56.917 05:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:48:56.917 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:48:56.917 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:48:56.917 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:48:56.917 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:48:56.917 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:48:57.176 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:48:57.176 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:48:57.176 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:48:57.176 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:48:57.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:48:57.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:48:57.176 00:48:57.176 --- 10.0.0.2 ping statistics --- 00:48:57.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:57.176 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:48:57.176 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:48:57.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:48:57.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:48:57.176 00:48:57.176 --- 10.0.0.1 ping statistics --- 00:48:57.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:57.176 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:48:57.176 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:48:57.176 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:48:57.176 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:48:57.176 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:48:57.176 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:48:57.176 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:48:57.176 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:48:57.176 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:48:57.176 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:48:57.176 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:48:57.176 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:48:57.176 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:57.176 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:48:57.176 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=698841 00:48:57.176 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:48:57.176 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 698841 00:48:57.176 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 698841 ']' 00:48:57.176 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:57.176 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:48:57.176 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:57.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
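The tc4 target is started inside the cvl_0_0_ns_spdk namespace wired up above and is then driven over the /var/tmp/spdk.sock RPC socket: the create_subsystems phase that follows builds a tcp transport and ten Malloc-backed subsystems (cnode1..cnode10) listening on 10.0.0.2:4420. Roughly, each per-subsystem block written to rpcs.txt expands to RPCs like these (a hedged sketch; the RPC names are standard SPDK calls, while the malloc size, block size and serial number are illustrative assumptions):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # tree used throughout this run
  RPC="$SPDK_DIR/scripts/rpc.py"                               # talks to /var/tmp/spdk.sock by default
  i=1                                                          # the loop below repeats this for i in 1..10
  "$RPC" bdev_malloc_create -b "Malloc$i" 64 512                                         # backing bdev (sizes assumed)
  "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"  # allow any host, serial assumed
  "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"                   # attach the namespace
  "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420

spdk_nvme_perf is then pointed at the same address further down with -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' while the target is killed mid-run, which is what produces the write-error storm at the end of this section.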
00:48:57.176 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:48:57.176 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:48:57.176 [2024-12-09 05:43:51.252208] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:48:57.176 [2024-12-09 05:43:51.252305] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:57.176 [2024-12-09 05:43:51.323907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:48:57.176 [2024-12-09 05:43:51.378878] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:48:57.176 [2024-12-09 05:43:51.378935] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:48:57.176 [2024-12-09 05:43:51.378958] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:48:57.176 [2024-12-09 05:43:51.378968] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:48:57.176 [2024-12-09 05:43:51.378977] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:48:57.176 [2024-12-09 05:43:51.380365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:48:57.176 [2024-12-09 05:43:51.380423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:48:57.176 [2024-12-09 05:43:51.380491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:48:57.176 [2024-12-09 05:43:51.380494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:48:57.434 [2024-12-09 05:43:51.532012] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:48:57.434 05:43:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:48:57.434 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:48:57.434 Malloc1 
00:48:57.434 [2024-12-09 05:43:51.637840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:57.692 Malloc2 00:48:57.692 Malloc3 00:48:57.692 Malloc4 00:48:57.692 Malloc5 00:48:57.692 Malloc6 00:48:57.692 Malloc7 00:48:57.949 Malloc8 00:48:57.949 Malloc9 00:48:57.949 Malloc10 00:48:57.949 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:48:57.950 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:48:57.950 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:57.950 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:48:57.950 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=699021 00:48:57.950 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:48:57.950 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:48:58.207 [2024-12-09 05:43:52.176624] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:49:03.476 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:49:03.476 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 698841 00:49:03.476 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 698841 ']' 00:49:03.476 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 698841 00:49:03.476 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:49:03.476 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:03.476 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 698841 00:49:03.476 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:49:03.476 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:49:03.476 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 698841' 00:49:03.476 killing process with pid 698841 00:49:03.476 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 698841 00:49:03.476 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 698841 00:49:03.476 [2024-12-09 05:43:57.179397] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14becb0 is same with the state(6) to be set 00:49:03.476 [2024-12-09 05:43:57.179503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14becb0 is same with the state(6) to be set 00:49:03.476 [2024-12-09 05:43:57.179548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14becb0 is same with the state(6) to be set 00:49:03.476 [2024-12-09 05:43:57.179572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14becb0 is same with the state(6) to be set 00:49:03.476 [2024-12-09 05:43:57.179606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14becb0 is same with the state(6) to be set 00:49:03.476 [2024-12-09 05:43:57.179624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14becb0 is same with the state(6) to be set 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 starting I/O failed: -6 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 starting I/O failed: -6 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 starting I/O failed: -6 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 starting I/O failed: -6 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 starting I/O failed: -6 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 starting I/O failed: -6 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 starting I/O failed: -6 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 starting I/O failed: -6 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 starting I/O failed: -6 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 starting I/O failed: -6 00:49:03.476 [2024-12-09 05:43:57.181003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:49:03.476 Write completed with error 
(sct=0, sc=8) 00:49:03.476 starting I/O failed: -6 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.476 starting I/O failed: -6 00:49:03.476 Write completed with error (sct=0, sc=8) 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 starting I/O failed: -6 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 starting I/O failed: -6 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 starting I/O failed: -6 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 starting I/O failed: -6 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 starting I/O failed: -6 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 starting I/O failed: -6 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 starting I/O failed: -6 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 starting I/O failed: -6 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 starting I/O failed: -6 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 starting I/O failed: -6 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 starting I/O failed: -6 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 starting I/O failed: -6 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 starting I/O failed: -6 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 starting I/O failed: -6 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 starting I/O failed: -6 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 starting I/O failed: -6 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 [2024-12-09 05:43:57.181943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be7e0 is same with the state(6) to be set 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 starting I/O failed: -6 00:49:03.477 [2024-12-09 05:43:57.181979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be7e0 is same with Write completed with error (sct=0, sc=8) 00:49:03.477 the state(6) to be set 00:49:03.477 [2024-12-09 05:43:57.181998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be7e0 is same with the state(6) to be set 00:49:03.477 Write completed with error (sct=0, sc=8) 00:49:03.477 starting I/O failed: -6 00:49:03.477 [2024-12-09 05:43:57.182011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be7e0 is same with the state(6) to be set 00:49:03.477 [2024-12-09 05:43:57.182024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x14be7e0 is same with the state(6) to be set
00:49:03.477 Write completed with error (sct=0, sc=8)
00:49:03.477 [2024-12-09 05:43:57.182038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be7e0 is same with the state(6) to be set
00:49:03.477 Write completed with error (sct=0, sc=8)
00:49:03.477 starting I/O failed: -6
00:49:03.477 [2024-12-09 05:43:57.182101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:49:03.477 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated for each outstanding write)
00:49:03.477 [2024-12-09 05:43:57.183281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:49:03.477 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated for each outstanding write)
00:49:03.478 [2024-12-09 05:43:57.184348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171a030 is same with the state(6) to be set (message repeated through [2024-12-09 05:43:57.184474])
00:49:03.478 [2024-12-09 05:43:57.184874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171a520 is same with the state(6) to be set (message repeated through [2024-12-09 05:43:57.184963])
00:49:03.478 [2024-12-09 05:43:57.185030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:49:03.478 NVMe io qpair process completion error
00:49:03.478 [2024-12-09 05:43:57.185642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171a9f0 is same with the state(6) to be set (message repeated through [2024-12-09 05:43:57.185790])
00:49:03.478 [2024-12-09 05:43:57.186653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1719b60 is same with the state(6) to be set (message repeated through [2024-12-09 05:43:57.186813])
00:49:03.478 [2024-12-09 05:43:57.188032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16775d0 is same with the state(6) to be set (message repeated through [2024-12-09 05:43:57.188159])
00:49:03.478 [2024-12-09 05:43:57.189428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1677f70 is same with the state(6) to be set (message repeated through [2024-12-09 05:43:57.189514])
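The "CQ transport error -6 (No such device or address)" entries above come from the host-side completion poller once the target end of the TCP connection has gone away; -6 is -ENXIO. Below is a minimal sketch, not the test's actual code, of how an SPDK host application typically surfaces that return value from spdk_nvme_qpair_process_completions(); the helper name poll_qpair_once and the error handling are illustrative assumptions.

/* Sketch only: polling a qpair and detecting the -ENXIO (-6) condition
 * reported as "CQ transport error -6" in the log above. */
#include <errno.h>
#include <stdio.h>
#include "spdk/nvme.h"

static int32_t
poll_qpair_once(struct spdk_nvme_qpair *qpair)
{
	/* 0 means "process as many completions as are available". */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc == -ENXIO) {
		/* Transport is gone (target shut down or connection reset);
		 * outstanding I/O will be completed with an aborted status. */
		fprintf(stderr, "CQ transport error %d on qpair\n", rc);
	}
	return rc;
}

In this shutdown test that return value is expected: the target is being torn down while writes are still in flight, so every remaining qpair eventually reports it.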
00:49:03.478 [2024-12-09 05:43:57.190289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171c220 is same with the state(6) to be set (message repeated through [2024-12-09 05:43:57.190409])
00:49:03.478 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated for each outstanding write)
00:49:03.479 [2024-12-09 05:43:57.192185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:49:03.479 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated for each outstanding write)
00:49:03.479 [2024-12-09 05:43:57.193302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:49:03.479 [2024-12-09 05:43:57.193618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1678910 is same with the state(6) to be set (message repeated through [2024-12-09 05:43:57.193716])
00:49:03.479 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated for each outstanding write)
00:49:03.479 [2024-12-09 05:43:57.194245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1678de0 is same with the state(6) to be set (message repeated through [2024-12-09 05:43:57.194355])
00:49:03.480 [2024-12-09 05:43:57.194451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:49:03.480 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated for each outstanding write)
00:49:03.480 [2024-12-09 05:43:57.194988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16792b0 is same with the state(6) to be set (message repeated through [2024-12-09 05:43:57.195099])
00:49:03.480 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated for each outstanding write)
00:49:03.480 [2024-12-09 05:43:57.195473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1678440 is same with the state(6) to be set (message repeated through [2024-12-09 05:43:57.195635])
00:49:03.480 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated for each outstanding write)
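For reference on the "(sct=0, sc=8)" pairs in the write failures above: with the generic status code type (sct=0), status code 8 corresponds to a command aborted because its submission queue was deleted, which is how outstanding writes are expected to finish when a qpair is torn down during shutdown. A small hedged sketch follows, showing how such a completion would be decoded with SPDK's public API; the callback name write_done is hypothetical.

/* Sketch only: decoding the sct/sc values printed as
 * "Write completed with error (sct=0, sc=8)" above. */
#include <stdio.h>
#include "spdk/nvme.h"

static void
write_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		printf("Write completed with error (sct=%d, sc=%d): %s\n",
		       cpl->status.sct, cpl->status.sc,
		       spdk_nvme_cpl_get_status_string(&cpl->status));
		/* For this log: sct == SPDK_NVME_SCT_GENERIC (0) and
		 * sc == SPDK_NVME_SC_ABORTED_SQ_DELETION (8). */
	}
}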
00:49:03.480 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated for each outstanding write)
00:49:03.480 [2024-12-09 05:43:57.196321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:49:03.480 NVMe io qpair process completion error
00:49:03.480 [2024-12-09 05:43:57.197168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167b460 is same with the state(6) to be set (message repeated through [2024-12-09 05:43:57.197319])
00:49:03.480 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated for each outstanding write)
00:49:03.480 [2024-12-09 05:43:57.197583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167b930 is same with the state(6) to be set (message repeated through [2024-12-09 05:43:57.197675])
00:49:03.481 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated for each outstanding write)
00:49:03.481 [2024-12-09 05:43:57.198166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:49:03.481 [2024-12-09 05:43:57.198809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167aac0 is same with the state(6) to be set (message repeated through [2024-12-09 05:43:57.198976])
00:49:03.481 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated for each outstanding write)
00:49:03.482 [2024-12-09 05:43:57.200588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:49:03.482 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated for each outstanding write)
00:49:03.482 [2024-12-09 05:43:57.203118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:49:03.482 NVMe io qpair process completion error
00:49:03.482 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated for each outstanding write)
00:49:03.483 [2024-12-09 05:43:57.204367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:49:03.483 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated for each outstanding write)
00:49:03.483 [2024-12-09 05:43:57.205362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:49:03.483 Write completed with error (sct=0, sc=8)
00:49:03.483 starting I/O failed:
-6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 [2024-12-09 05:43:57.206554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 
00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.483 Write completed with error (sct=0, sc=8) 00:49:03.483 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 
00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 [2024-12-09 05:43:57.208349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:49:03.484 NVMe io qpair process completion error 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 
00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 [2024-12-09 05:43:57.209785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 [2024-12-09 05:43:57.210728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, 
sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.484 Write completed with error (sct=0, sc=8) 00:49:03.484 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 
00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 [2024-12-09 05:43:57.211996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 
00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 [2024-12-09 05:43:57.213765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:49:03.485 NVMe io qpair process completion error 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 starting I/O failed: -6 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 Write completed with error (sct=0, sc=8) 00:49:03.485 Write 
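The two message shapes repeated throughout this stretch of the log come from SPDK's NVMe completion path: once the TCP connection to a subsystem is torn down, polling a qpair fails with -6 (-ENXIO, "No such device or address"), and each outstanding write is completed with an aborted status (sct=0 generic status, sc=8). A minimal sketch of that polling path, assuming only SPDK's public API; write_done and poll_qpair are illustrative names, not part of the test code, and the actual polling in this run happens inside the nvmf/bdev layers:

/* Sketch only: shows where the two message shapes seen in this log originate. */
#include <errno.h>
#include <stdio.h>

#include "spdk/nvme.h"

/* Completion callback (the kind of cb_fn passed to spdk_nvme_ns_cmd_write()).
 * cpl->status.sct / cpl->status.sc are the "(sct=0, sc=8)" pair printed above. */
static void
write_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;
	if (spdk_nvme_cpl_is_error(cpl)) {
		printf("Write completed with error (sct=%d, sc=%d)\n",
		       cpl->status.sct, cpl->status.sc);
	}
}

/* Poll one I/O qpair; a negative return such as -ENXIO (-6) is what
 * nvme_qpair.c reports as "CQ transport error -6 (No such device or address)". */
static int32_t
poll_qpair(struct spdk_nvme_qpair *qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* 0 = no limit */);

	if (rc < 0) {
		fprintf(stderr, "qpair poll failed: %d\n", rc);
	}
	return rc;
}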
[... the same write-failure burst repeats between each of the cnode9 qpair failures below ...]
00:49:03.486 [2024-12-09 05:43:57.215171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:49:03.486 [2024-12-09 05:43:57.216300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:49:03.486 [2024-12-09 05:43:57.217467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:49:03.487 [2024-12-09 05:43:57.220973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:49:03.487 NVMe io qpair process completion error
[... the same write-failure burst repeats between each of the cnode6 qpair failures below ...]
00:49:03.487 [2024-12-09 05:43:57.222202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:49:03.487 [2024-12-09 05:43:57.223364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:49:03.488 [2024-12-09 05:43:57.224575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:49:03.488 [2024-12-09 05:43:57.228260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:49:03.488 NVMe io qpair process completion error
[... the same write-failure burst repeats between each of the cnode4 qpair failures below ...]
00:49:03.489 [2024-12-09 05:43:57.229492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:49:03.489 [2024-12-09 05:43:57.230589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:49:03.489 Write
completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 [2024-12-09 05:43:57.231737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.489 Write completed with error (sct=0, sc=8) 00:49:03.489 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 
00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 
00:49:03.490 starting I/O failed: -6 00:49:03.490 [2024-12-09 05:43:57.234129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:49:03.490 NVMe io qpair process completion error 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 
00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.490 Write completed with error (sct=0, sc=8) 00:49:03.490 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed 
with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 [2024-12-09 05:43:57.236969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 
Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write 
completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 [2024-12-09 05:43:57.238817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:49:03.491 NVMe io qpair process completion error 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 Write completed with error (sct=0, sc=8) 00:49:03.491 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 [2024-12-09 05:43:57.240048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error 
(sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 [2024-12-09 05:43:57.241141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 
00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 
00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 [2024-12-09 05:43:57.242330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.492 Write completed with error (sct=0, sc=8) 00:49:03.492 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write 
completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 Write completed with error (sct=0, sc=8) 00:49:03.493 starting I/O failed: -6 00:49:03.493 [2024-12-09 05:43:57.244699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:49:03.493 NVMe io qpair process completion error 00:49:03.493 Initializing NVMe Controllers 00:49:03.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:49:03.493 Controller IO queue size 128, less than required. 00:49:03.493 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:49:03.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:49:03.493 Controller IO queue size 128, less than required. 00:49:03.493 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:49:03.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:49:03.493 Controller IO queue size 128, less than required. 00:49:03.493 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:49:03.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:49:03.493 Controller IO queue size 128, less than required.
00:49:03.493 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:49:03.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:49:03.493 Controller IO queue size 128, less than required.
00:49:03.493 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:49:03.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:49:03.493 Controller IO queue size 128, less than required.
00:49:03.493 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:49:03.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:49:03.493 Controller IO queue size 128, less than required.
00:49:03.493 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:49:03.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:49:03.493 Controller IO queue size 128, less than required.
00:49:03.493 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:49:03.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:49:03.493 Controller IO queue size 128, less than required.
00:49:03.493 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:49:03.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:49:03.493 Controller IO queue size 128, less than required.
00:49:03.493 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:49:03.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:49:03.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:49:03.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:49:03.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:49:03.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:49:03.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:49:03.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:49:03.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:49:03.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:49:03.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:49:03.493 Initialization complete. Launching workers.
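The repeated "Controller IO queue size 128, less than required" advisory above means the benchmark requested a deeper queue than each NVMe-oF controller advertises, so the surplus requests wait in the host NVMe driver rather than being outstanding on the wire. The actual perf command line is not captured in this excerpt; the sketch below is only an illustration of how such a run could be repeated with the queue depth held at or below the advertised limit, assuming the commonly documented spdk_nvme_perf options (-q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds, -r transport ID) and the target address seen in this log.

    # Hypothetical re-run with a queue depth no larger than the controller's 128-entry IO queue.
    # Flag names assume the standard spdk_nvme_perf option set; verify against your SPDK build.
    ./build/bin/spdk_nvme_perf -q 64 -o 45056 -w write -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

As a cross-check on the summary that follows, throughput is roughly IOPS × I/O size: 1859.88 IOPS × 45056 bytes is about 79.9 MiB/s, matching the cnode3 row, so the run appears to use ~44 KiB writes (the 45056-byte size is inferred from that ratio, not read from the log).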
00:49:03.493 ========================================================
00:49:03.493 Latency(us)
00:49:03.493 Device Information : IOPS MiB/s Average min max
00:49:03.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1859.88 79.92 68842.17 923.86 120245.97
00:49:03.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1851.12 79.54 69194.51 934.79 124237.67
00:49:03.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1862.29 80.02 68807.24 768.02 125992.22
00:49:03.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1871.28 80.41 68528.70 801.09 117844.94
00:49:03.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1871.28 80.41 68581.83 1109.36 116522.11
00:49:03.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1828.54 78.57 69387.18 1151.43 116717.16
00:49:03.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1767.61 75.95 72572.90 731.76 135399.15
00:49:03.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1786.02 76.74 71850.50 1113.13 135895.63
00:49:03.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1750.96 75.24 72465.38 1150.90 115907.68
00:49:03.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1759.28 75.59 72165.12 847.44 117265.13
00:49:03.493 ========================================================
00:49:03.493 Total : 18208.27 782.39 70196.91 731.76 135895.63
00:49:03.493
00:49:03.493 [2024-12-09 05:43:57.251329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188b6b0 is same with the state(6) to be set
00:49:03.493 [2024-12-09 05:43:57.251437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188dae0 is same with the state(6) to be set
00:49:03.493 [2024-12-09 05:43:57.251496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188cc50 is same with the state(6) to be set
00:49:03.493 [2024-12-09 05:43:57.251564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188c2c0 is same with the state(6) to be set
00:49:03.493 [2024-12-09 05:43:57.251620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188b9e0 is same with the state(6) to be set
00:49:03.493 [2024-12-09 05:43:57.251677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188d720 is same with the state(6) to be set
00:49:03.493 [2024-12-09 05:43:57.251732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188bd10 is same with the state(6) to be set
00:49:03.493 [2024-12-09 05:43:57.251787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188c5f0 is same with the state(6) to be set
00:49:03.493 [2024-12-09 05:43:57.251855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188d900 is same with the state(6) to be set
00:49:03.493 [2024-12-09 05:43:57.251913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188c920 is same with the state(6) to be set
00:49:03.494 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:49:03.753 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:49:04.689 05:43:58
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 699021 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 699021 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 699021 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:49:04.689 rmmod nvme_tcp 00:49:04.689 rmmod nvme_fabrics 00:49:04.689 rmmod nvme_keyring 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 698841 ']' 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 698841 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 698841 ']' 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 698841 00:49:04.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (698841) - No such process 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 698841 is not found' 00:49:04.689 Process with pid 698841 is not found 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:04.689 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:06.591 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:49:06.591 00:49:06.591 real 0m9.909s 00:49:06.591 user 0m24.033s 00:49:06.591 sys 0m5.616s 00:49:06.591 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:06.591 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:49:06.591 ************************************ 00:49:06.591 END TEST nvmf_shutdown_tc4 00:49:06.591 ************************************ 00:49:06.850 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:49:06.850 00:49:06.850 real 0m37.710s 00:49:06.850 user 1m41.566s 00:49:06.850 sys 0m12.026s 00:49:06.850 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:06.850 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- 
# set +x 00:49:06.850 ************************************ 00:49:06.850 END TEST nvmf_shutdown 00:49:06.850 ************************************ 00:49:06.850 05:44:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:49:06.850 05:44:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:49:06.850 05:44:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:06.850 05:44:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:49:06.850 ************************************ 00:49:06.850 START TEST nvmf_nsid 00:49:06.850 ************************************ 00:49:06.850 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:49:06.850 * Looking for test storage... 00:49:06.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:49:06.851 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:49:06.851 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:49:06.851 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:49:06.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:06.851 --rc genhtml_branch_coverage=1 00:49:06.851 --rc genhtml_function_coverage=1 00:49:06.851 --rc genhtml_legend=1 00:49:06.851 --rc geninfo_all_blocks=1 00:49:06.851 --rc geninfo_unexecuted_blocks=1 00:49:06.851 00:49:06.851 ' 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:49:06.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:06.851 --rc genhtml_branch_coverage=1 00:49:06.851 --rc genhtml_function_coverage=1 00:49:06.851 --rc genhtml_legend=1 00:49:06.851 --rc geninfo_all_blocks=1 00:49:06.851 --rc geninfo_unexecuted_blocks=1 00:49:06.851 00:49:06.851 ' 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:49:06.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:06.851 --rc genhtml_branch_coverage=1 00:49:06.851 --rc genhtml_function_coverage=1 00:49:06.851 --rc genhtml_legend=1 00:49:06.851 --rc geninfo_all_blocks=1 00:49:06.851 --rc geninfo_unexecuted_blocks=1 00:49:06.851 00:49:06.851 ' 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:49:06.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:06.851 --rc genhtml_branch_coverage=1 00:49:06.851 --rc genhtml_function_coverage=1 00:49:06.851 --rc genhtml_legend=1 00:49:06.851 --rc geninfo_all_blocks=1 00:49:06.851 --rc geninfo_unexecuted_blocks=1 00:49:06.851 00:49:06.851 ' 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:49:06.851 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:49:06.851 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:49:09.381 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:49:09.381 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:49:09.381 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:49:09.381 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:49:09.381 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:49:09.381 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:49:09.381 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:49:09.381 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:49:09.381 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:49:09.381 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:49:09.381 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:49:09.381 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:49:09.381 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:49:09.381 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:49:09.381 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:49:09.381 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:49:09.381 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:49:09.381 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:49:09.381 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:49:09.381 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:49:09.381 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:49:09.381 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:49:09.381 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:49:09.381 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:49:09.381 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:49:09.381 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:49:09.381 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:49:09.382 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:49:09.382 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
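At this point nvmf/common.sh has matched both ports of the Intel E810 NIC (vendor 0x8086, device 0x159b) and is about to map each PCI function to its kernel net device through sysfs, which is where the "Found net devices under 0000:0a:00.x" lines below come from. A minimal sketch of that sysfs walk, using only the two PCI addresses reported in the trace (a simplification of the script's actual loop):

  # List the net interfaces backing each matched PCI function
  # (device paths taken from the 'Found 0000:0a:00.x' lines above).
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$dev" ] || continue            # skip if the glob did not match anything
          echo "Found net device under $pci: ${dev##*/}"
      done
  done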
00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:49:09.382 Found net devices under 0000:0a:00.0: cvl_0_0 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:49:09.382 Found net devices under 0000:0a:00.1: cvl_0_1 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:49:09.382 05:44:03 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:49:09.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:49:09.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:49:09.382 00:49:09.382 --- 10.0.0.2 ping statistics --- 00:49:09.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:09.382 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:49:09.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:49:09.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:49:09.382 00:49:09.382 --- 10.0.0.1 ping statistics --- 00:49:09.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:09.382 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=701765 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 701765 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 701765 ']' 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:09.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:09.382 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:49:09.382 [2024-12-09 05:44:03.448376] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:49:09.382 [2024-12-09 05:44:03.448471] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:09.382 [2024-12-09 05:44:03.519536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:09.382 [2024-12-09 05:44:03.571560] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:09.382 [2024-12-09 05:44:03.571623] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:09.382 [2024-12-09 05:44:03.571645] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:09.383 [2024-12-09 05:44:03.571664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:09.383 [2024-12-09 05:44:03.571674] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:49:09.383 [2024-12-09 05:44:03.572235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=701790 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
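The nsid test now has two SPDK targets on the wiring set up earlier in the trace: the main nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace and listens on 10.0.0.2:4420, while the second spdk_tgt stays in the default namespace, is driven over the /var/tmp/tgt2.sock RPC socket, and later listens on 10.0.0.1:4421. A condensed sketch of that namespace wiring, repeating the ip/iptables commands already visible in the trace above:

  # One physical port per side: cvl_0_0 becomes the target side inside a
  # network namespace, cvl_0_1 stays on the host as the initiator side.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP traffic to port 4420 in from the initiator interface.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT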
00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=ea733a57-8f1a-40a2-a928-b5708ba0ace9 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=02e3a866-855a-4c7c-8375-fab3414434e9 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=5c8e21ff-2a23-44ea-8572-54a3c3ae9ff0 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:49:09.640 null0 00:49:09.640 null1 00:49:09.640 null2 00:49:09.640 [2024-12-09 05:44:03.749288] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:09.640 [2024-12-09 05:44:03.769954] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:49:09.640 [2024-12-09 05:44:03.770035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid701790 ] 00:49:09.640 [2024-12-09 05:44:03.773577] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 701790 /var/tmp/tgt2.sock 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 701790 ']' 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:49:09.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
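Each of the three UUIDs generated here is later compared against the NGUID that `nvme id-ns` reports for the corresponding namespace (nvme0n1..nvme0n3); as the uuid2nguid / `tr -d -` calls further down show, the check amounts to stripping the dashes and uppercasing both sides. A small sketch of that check, using the first UUID and device name from the trace:

  uuid2nguid() {
      # ea733a57-8f1a-40a2-a928-b5708ba0ace9 -> EA733A578F1A40A2A928B5708BA0ACE9
      local u=${1//-/}
      echo "${u^^}"
  }

  expected=$(uuid2nguid ea733a57-8f1a-40a2-a928-b5708ba0ace9)
  reported=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
  [[ $expected == "${reported^^}" ]] && echo "nguid matches uuid"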
00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:09.640 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:49:09.640 [2024-12-09 05:44:03.844934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:09.897 [2024-12-09 05:44:03.902847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:49:10.154 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:10.154 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:49:10.154 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:49:10.412 [2024-12-09 05:44:04.568759] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:10.412 [2024-12-09 05:44:04.584951] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:49:10.412 nvme0n1 nvme0n2 00:49:10.412 nvme1n1 00:49:10.412 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:49:10.412 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:49:10.412 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:49:10.977 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:49:10.977 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:49:10.977 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:49:10.977 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:49:10.977 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:49:10.977 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:49:10.977 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:49:10.977 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:49:10.977 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:49:10.977 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:49:11.234 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:49:11.234 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:49:11.234 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:49:12.166 05:44:06 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid ea733a57-8f1a-40a2-a928-b5708ba0ace9 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ea733a578f1a40a2a928b5708ba0ace9 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo EA733A578F1A40A2A928B5708BA0ACE9 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ EA733A578F1A40A2A928B5708BA0ACE9 == \E\A\7\3\3\A\5\7\8\F\1\A\4\0\A\2\A\9\2\8\B\5\7\0\8\B\A\0\A\C\E\9 ]] 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 02e3a866-855a-4c7c-8375-fab3414434e9 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=02e3a866855a4c7c8375fab3414434e9 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 02E3A866855A4C7C8375FAB3414434E9 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 02E3A866855A4C7C8375FAB3414434E9 == \0\2\E\3\A\8\6\6\8\5\5\A\4\C\7\C\8\3\7\5\F\A\B\3\4\1\4\4\3\4\E\9 ]] 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:49:12.166 05:44:06 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 5c8e21ff-2a23-44ea-8572-54a3c3ae9ff0 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:49:12.166 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:49:12.424 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=5c8e21ff2a2344ea857254a3c3ae9ff0 00:49:12.424 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 5C8E21FF2A2344EA857254A3C3AE9FF0 00:49:12.424 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 5C8E21FF2A2344EA857254A3C3AE9FF0 == \5\C\8\E\2\1\F\F\2\A\2\3\4\4\E\A\8\5\7\2\5\4\A\3\C\3\A\E\9\F\F\0 ]] 00:49:12.424 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:49:12.424 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:49:12.424 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:49:12.424 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 701790 00:49:12.424 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 701790 ']' 00:49:12.424 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 701790 00:49:12.424 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:49:12.424 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:12.424 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 701790 00:49:12.424 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:49:12.424 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:49:12.424 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 701790' 00:49:12.424 killing process with pid 701790 00:49:12.424 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 701790 00:49:12.424 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 701790 00:49:12.988 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:49:12.988 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:49:12.988 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:49:12.988 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:49:12.988 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set 
+e 00:49:12.988 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:49:12.988 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:49:12.988 rmmod nvme_tcp 00:49:12.988 rmmod nvme_fabrics 00:49:12.988 rmmod nvme_keyring 00:49:12.988 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:49:12.988 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:49:12.988 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:49:12.988 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 701765 ']' 00:49:12.988 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 701765 00:49:12.988 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 701765 ']' 00:49:12.988 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 701765 00:49:12.988 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:49:12.988 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:12.988 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 701765 00:49:12.988 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:49:12.988 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:49:12.988 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 701765' 00:49:12.988 killing process with pid 701765 00:49:12.988 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 701765 00:49:12.988 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 701765 00:49:13.246 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:49:13.246 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:49:13.246 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:49:13.246 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:49:13.246 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:49:13.246 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:49:13.246 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:49:13.246 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:49:13.246 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:49:13.246 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:13.246 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:13.246 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:15.782 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:49:15.782 00:49:15.782 real 0m8.581s 00:49:15.782 user 0m8.488s 00:49:15.782 
sys 0m2.791s 00:49:15.782 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:15.782 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:49:15.782 ************************************ 00:49:15.782 END TEST nvmf_nsid 00:49:15.782 ************************************ 00:49:15.782 05:44:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:49:15.782 00:49:15.782 real 11m41.084s 00:49:15.782 user 27m29.351s 00:49:15.782 sys 2m47.353s 00:49:15.782 05:44:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:15.782 05:44:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:49:15.782 ************************************ 00:49:15.782 END TEST nvmf_target_extra 00:49:15.782 ************************************ 00:49:15.782 05:44:09 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:49:15.782 05:44:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:49:15.782 05:44:09 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:15.782 05:44:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:49:15.782 ************************************ 00:49:15.782 START TEST nvmf_host 00:49:15.782 ************************************ 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:49:15.782 * Looking for test storage... 00:49:15.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:49:15.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:15.782 --rc genhtml_branch_coverage=1 00:49:15.782 --rc genhtml_function_coverage=1 00:49:15.782 --rc genhtml_legend=1 00:49:15.782 --rc geninfo_all_blocks=1 00:49:15.782 --rc geninfo_unexecuted_blocks=1 00:49:15.782 00:49:15.782 ' 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:49:15.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:15.782 --rc genhtml_branch_coverage=1 00:49:15.782 --rc genhtml_function_coverage=1 00:49:15.782 --rc genhtml_legend=1 00:49:15.782 --rc geninfo_all_blocks=1 00:49:15.782 --rc geninfo_unexecuted_blocks=1 00:49:15.782 00:49:15.782 ' 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:49:15.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:15.782 --rc genhtml_branch_coverage=1 00:49:15.782 --rc genhtml_function_coverage=1 00:49:15.782 --rc genhtml_legend=1 00:49:15.782 --rc geninfo_all_blocks=1 00:49:15.782 --rc geninfo_unexecuted_blocks=1 00:49:15.782 00:49:15.782 ' 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:49:15.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:15.782 --rc genhtml_branch_coverage=1 00:49:15.782 --rc genhtml_function_coverage=1 00:49:15.782 --rc genhtml_legend=1 00:49:15.782 --rc geninfo_all_blocks=1 00:49:15.782 --rc geninfo_unexecuted_blocks=1 00:49:15.782 00:49:15.782 ' 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
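As in the nsid run above, common.sh derives the host identity for this suite from `nvme gen-hostnqn`: the generated NQN carries a UUID suffix, and that same UUID shows up as NVME_HOSTID in the trace below. A minimal sketch of that derivation (the exact parameter expansion used by common.sh is an assumption here):

  # nqn.2014-08.org.nvmexpress:uuid:<uuid>  ->  <uuid>
  NVME_HOSTNQN=$(nvme gen-hostnqn)
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # assumed extraction; matches the values logged below
  echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"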
00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:15.782 05:44:09 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:49:15.783 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:49:15.783 ************************************ 00:49:15.783 START TEST nvmf_multicontroller 00:49:15.783 ************************************ 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:49:15.783 * Looking for test storage... 
00:49:15.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:49:15.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:15.783 --rc genhtml_branch_coverage=1 00:49:15.783 --rc genhtml_function_coverage=1 00:49:15.783 --rc genhtml_legend=1 00:49:15.783 --rc geninfo_all_blocks=1 00:49:15.783 --rc geninfo_unexecuted_blocks=1 00:49:15.783 00:49:15.783 ' 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:49:15.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:15.783 --rc genhtml_branch_coverage=1 00:49:15.783 --rc genhtml_function_coverage=1 00:49:15.783 --rc genhtml_legend=1 00:49:15.783 --rc geninfo_all_blocks=1 00:49:15.783 --rc geninfo_unexecuted_blocks=1 00:49:15.783 00:49:15.783 ' 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:49:15.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:15.783 --rc genhtml_branch_coverage=1 00:49:15.783 --rc genhtml_function_coverage=1 00:49:15.783 --rc genhtml_legend=1 00:49:15.783 --rc geninfo_all_blocks=1 00:49:15.783 --rc geninfo_unexecuted_blocks=1 00:49:15.783 00:49:15.783 ' 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:49:15.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:15.783 --rc genhtml_branch_coverage=1 00:49:15.783 --rc genhtml_function_coverage=1 00:49:15.783 --rc genhtml_legend=1 00:49:15.783 --rc geninfo_all_blocks=1 00:49:15.783 --rc geninfo_unexecuted_blocks=1 00:49:15.783 00:49:15.783 ' 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:49:15.783 05:44:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:15.783 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:49:15.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:49:15.784 05:44:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:49:15.784 05:44:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:18.315 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:49:18.316 
05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:49:18.316 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:49:18.316 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:49:18.316 05:44:12 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:49:18.316 Found net devices under 0000:0a:00.0: cvl_0_0 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:49:18.316 Found net devices under 0000:0a:00.1: cvl_0_1 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
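[Editor's note] The gather_supported_nvmf_pci_devs block above matches the two e810 ports by PCI ID (0x8086:0x159b) and then resolves each PCI address to its kernel interface by listing /sys/bus/pci/devices/<bdf>/net/, which is where the cvl_0_0 and cvl_0_1 names come from. The same lookup can be done standalone; a short sketch using the BDF reported in this log:

    # Resolve a PCI address to its kernel net interface name(s) via sysfs,
    # the same lookup the trace above performs; the BDF is the one from this log.
    pci=0000:0a:00.0
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] || continue    # no net children: not a NIC, or no driver bound
        echo "Found net devices under $pci: ${netdir##*/}"
    done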
00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:49:18.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:49:18.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:49:18.316 00:49:18.316 --- 10.0.0.2 ping statistics --- 00:49:18.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:18.316 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:49:18.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:49:18.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:49:18.316 00:49:18.316 --- 10.0.0.1 ping statistics --- 00:49:18.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:18.316 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:49:18.316 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:49:18.317 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:49:18.317 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:49:18.317 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:18.317 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:49:18.317 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:49:18.317 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:18.317 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:49:18.317 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:49:18.317 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:49:18.317 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:49:18.317 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:49:18.317 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:18.317 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=704319 00:49:18.317 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:49:18.317 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 704319 00:49:18.317 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 704319 ']' 00:49:18.317 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:18.317 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:18.317 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:18.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:18.317 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:18.317 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:18.317 [2024-12-09 05:44:12.308914] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:49:18.317 [2024-12-09 05:44:12.308988] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:18.317 [2024-12-09 05:44:12.383641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:49:18.317 [2024-12-09 05:44:12.444082] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:18.317 [2024-12-09 05:44:12.444152] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:18.317 [2024-12-09 05:44:12.444181] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:18.317 [2024-12-09 05:44:12.444193] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:18.317 [2024-12-09 05:44:12.444202] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:49:18.317 [2024-12-09 05:44:12.445761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:49:18.317 [2024-12-09 05:44:12.445830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:49:18.317 [2024-12-09 05:44:12.445833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:18.574 [2024-12-09 05:44:12.604641] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:18.574 Malloc0 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:18.574 [2024-12-09 05:44:12.670546] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:18.574 [2024-12-09 05:44:12.678383] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:18.574 Malloc1 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=704372 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 704372 /var/tmp/bdevperf.sock 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 704372 ']' 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:49:18.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
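[Editor's note] At this point multicontroller.sh has configured the target over /var/tmp/spdk.sock (TCP transport, Malloc0/Malloc1 bdevs, subsystems cnode1 and cnode2, listeners on 10.0.0.2 ports 4420 and 4421) and has started bdevperf with its own RPC socket at /var/tmp/bdevperf.sock. The target-side setup can be reproduced by hand with scripts/rpc.py from an SPDK checkout; a condensed sketch of the cnode1 half, with arguments copied from the trace (cnode2/Malloc1 follow the same pattern):

    # Target-side RPCs for cnode1, mirroring the rpc_cmd calls traced above.
    # Run from an SPDK checkout; socket path and values as they appear in this log.
    rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc bdev_malloc_create 64 512 -b Malloc0
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421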
00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:18.574 05:44:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:18.831 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:18.831 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:49:18.831 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:49:18.831 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:18.831 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:19.088 NVMe0n1 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:19.088 1 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:19.088 request: 00:49:19.088 { 00:49:19.088 "name": "NVMe0", 00:49:19.088 "trtype": "tcp", 00:49:19.088 "traddr": "10.0.0.2", 00:49:19.088 "adrfam": "ipv4", 00:49:19.088 "trsvcid": "4420", 00:49:19.088 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:49:19.088 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:49:19.088 "hostaddr": "10.0.0.1", 00:49:19.088 "prchk_reftag": false, 00:49:19.088 "prchk_guard": false, 00:49:19.088 "hdgst": false, 00:49:19.088 "ddgst": false, 00:49:19.088 "allow_unrecognized_csi": false, 00:49:19.088 "method": "bdev_nvme_attach_controller", 00:49:19.088 "req_id": 1 00:49:19.088 } 00:49:19.088 Got JSON-RPC error response 00:49:19.088 response: 00:49:19.088 { 00:49:19.088 "code": -114, 00:49:19.088 "message": "A controller named NVMe0 already exists with the specified network path" 00:49:19.088 } 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:19.088 request: 00:49:19.088 { 00:49:19.088 "name": "NVMe0", 00:49:19.088 "trtype": "tcp", 00:49:19.088 "traddr": "10.0.0.2", 00:49:19.088 "adrfam": "ipv4", 00:49:19.088 "trsvcid": "4420", 00:49:19.088 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:49:19.088 "hostaddr": "10.0.0.1", 00:49:19.088 "prchk_reftag": false, 00:49:19.088 "prchk_guard": false, 00:49:19.088 "hdgst": false, 00:49:19.088 "ddgst": false, 00:49:19.088 "allow_unrecognized_csi": false, 00:49:19.088 "method": "bdev_nvme_attach_controller", 00:49:19.088 "req_id": 1 00:49:19.088 } 00:49:19.088 Got JSON-RPC error response 00:49:19.088 response: 00:49:19.088 { 00:49:19.088 "code": -114, 00:49:19.088 "message": "A controller named NVMe0 already exists with the specified network path" 00:49:19.088 } 00:49:19.088 05:44:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:19.088 request: 00:49:19.088 { 00:49:19.088 "name": "NVMe0", 00:49:19.088 "trtype": "tcp", 00:49:19.088 "traddr": "10.0.0.2", 00:49:19.088 "adrfam": "ipv4", 00:49:19.088 "trsvcid": "4420", 00:49:19.088 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:49:19.088 "hostaddr": "10.0.0.1", 00:49:19.088 "prchk_reftag": false, 00:49:19.088 "prchk_guard": false, 00:49:19.088 "hdgst": false, 00:49:19.088 "ddgst": false, 00:49:19.088 "multipath": "disable", 00:49:19.088 "allow_unrecognized_csi": false, 00:49:19.088 "method": "bdev_nvme_attach_controller", 00:49:19.088 "req_id": 1 00:49:19.088 } 00:49:19.088 Got JSON-RPC error response 00:49:19.088 response: 00:49:19.088 { 00:49:19.088 "code": -114, 00:49:19.088 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:49:19.088 } 00:49:19.088 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:49:19.089 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:49:19.089 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:49:19.089 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:49:19.089 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:49:19.089 05:44:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:49:19.089 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:49:19.089 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:49:19.089 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:49:19.089 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:49:19.089 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:49:19.089 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:49:19.089 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:49:19.089 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:19.089 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:19.089 request: 00:49:19.089 { 00:49:19.089 "name": "NVMe0", 00:49:19.089 "trtype": "tcp", 00:49:19.089 "traddr": "10.0.0.2", 00:49:19.089 "adrfam": "ipv4", 00:49:19.089 "trsvcid": "4420", 00:49:19.089 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:49:19.089 "hostaddr": "10.0.0.1", 00:49:19.089 "prchk_reftag": false, 00:49:19.089 "prchk_guard": false, 00:49:19.089 "hdgst": false, 00:49:19.089 "ddgst": false, 00:49:19.089 "multipath": "failover", 00:49:19.089 "allow_unrecognized_csi": false, 00:49:19.089 "method": "bdev_nvme_attach_controller", 00:49:19.089 "req_id": 1 00:49:19.089 } 00:49:19.089 Got JSON-RPC error response 00:49:19.089 response: 00:49:19.089 { 00:49:19.089 "code": -114, 00:49:19.089 "message": "A controller named NVMe0 already exists with the specified network path" 00:49:19.089 } 00:49:19.089 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:49:19.089 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:49:19.089 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:49:19.089 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:49:19.089 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:49:19.089 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:49:19.089 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:19.089 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:19.345 NVMe0n1 00:49:19.345 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
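[Editor's note] The rejected attach attempts above all point NVMe0 back at the path it already claimed (10.0.0.2:4420), whether with a different host NQN, a different subsystem, or -x disable/failover, and each returns JSON-RPC error -114; only the final attach, which goes through the second listener 10.0.0.2:4421 for the same subsystem, is accepted and extends NVMe0 with an additional path. Replayed by hand against the bdevperf RPC socket, with every flag copied from the trace:

    # The failing and succeeding attach calls from the trace, replayed against
    # bdevperf's RPC socket; flags are copied from the log above.
    rpc_bp() { ./scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }

    # First attach creates controller NVMe0 (bdev NVMe0n1 shows up).
    rpc_bp bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1

    # Re-using the already-claimed path for a different subsystem fails with -114.
    rpc_bp bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 || true

    # The same subsystem through the second listener is accepted as an extra path.
    rpc_bp bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1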
00:49:19.345 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:49:19.345 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:19.345 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:19.345 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:19.345 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:49:19.345 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:19.345 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:19.602 00:49:19.602 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:19.602 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:49:19.602 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:49:19.602 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:19.602 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:19.602 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:19.602 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:49:19.602 05:44:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:49:20.535 { 00:49:20.535 "results": [ 00:49:20.535 { 00:49:20.535 "job": "NVMe0n1", 00:49:20.535 "core_mask": "0x1", 00:49:20.535 "workload": "write", 00:49:20.535 "status": "finished", 00:49:20.535 "queue_depth": 128, 00:49:20.535 "io_size": 4096, 00:49:20.535 "runtime": 1.004186, 00:49:20.535 "iops": 18593.16899458865, 00:49:20.535 "mibps": 72.62956638511191, 00:49:20.535 "io_failed": 0, 00:49:20.535 "io_timeout": 0, 00:49:20.535 "avg_latency_us": 6873.141901899757, 00:49:20.535 "min_latency_us": 4660.337777777778, 00:49:20.535 "max_latency_us": 12087.75111111111 00:49:20.535 } 00:49:20.535 ], 00:49:20.535 "core_count": 1 00:49:20.535 } 00:49:20.535 05:44:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:49:20.535 05:44:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:20.535 05:44:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:20.535 05:44:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:20.535 05:44:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:49:20.535 05:44:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 704372 00:49:20.535 05:44:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 704372 ']' 00:49:20.535 05:44:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 704372 00:49:20.535 05:44:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:49:20.535 05:44:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:20.796 05:44:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 704372 00:49:20.796 05:44:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:49:20.796 05:44:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:49:20.796 05:44:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 704372' 00:49:20.796 killing process with pid 704372 00:49:20.796 05:44:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 704372 00:49:20.796 05:44:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 704372 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:49:21.055 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:49:21.055 [2024-12-09 05:44:12.786873] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:49:21.055 [2024-12-09 05:44:12.786956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid704372 ] 00:49:21.055 [2024-12-09 05:44:12.856456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:21.055 [2024-12-09 05:44:12.915192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:21.055 [2024-12-09 05:44:13.602386] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name d6ba44c7-cc8f-4bd2-acb3-b3c3ff236d2e already exists 00:49:21.055 [2024-12-09 05:44:13.602424] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:d6ba44c7-cc8f-4bd2-acb3-b3c3ff236d2e alias for bdev NVMe1n1 00:49:21.055 [2024-12-09 05:44:13.602439] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:49:21.055 Running I/O for 1 seconds... 00:49:21.055 18543.00 IOPS, 72.43 MiB/s 00:49:21.055 Latency(us) 00:49:21.055 [2024-12-09T04:44:15.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:21.055 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:49:21.055 NVMe0n1 : 1.00 18593.17 72.63 0.00 0.00 6873.14 4660.34 12087.75 00:49:21.055 [2024-12-09T04:44:15.280Z] =================================================================================================================== 00:49:21.055 [2024-12-09T04:44:15.280Z] Total : 18593.17 72.63 0.00 0.00 6873.14 4660.34 12087.75 00:49:21.055 Received shutdown signal, test time was about 1.000000 seconds 00:49:21.055 00:49:21.055 Latency(us) 00:49:21.055 [2024-12-09T04:44:15.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:21.055 [2024-12-09T04:44:15.280Z] =================================================================================================================== 00:49:21.055 [2024-12-09T04:44:15.280Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:49:21.055 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:49:21.055 rmmod nvme_tcp 00:49:21.055 rmmod nvme_fabrics 00:49:21.055 rmmod nvme_keyring 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:49:21.055 
05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 704319 ']' 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 704319 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 704319 ']' 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 704319 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 704319 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 704319' 00:49:21.055 killing process with pid 704319 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 704319 00:49:21.055 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 704319 00:49:21.312 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:49:21.312 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:49:21.312 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:49:21.312 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:49:21.312 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:49:21.312 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:49:21.312 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:49:21.312 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:49:21.312 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:49:21.312 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:21.312 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:21.312 05:44:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:49:23.849 00:49:23.849 real 0m7.744s 00:49:23.849 user 0m12.062s 00:49:23.849 sys 0m2.449s 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:49:23.849 ************************************ 00:49:23.849 END TEST nvmf_multicontroller 00:49:23.849 ************************************ 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:49:23.849 ************************************ 00:49:23.849 START TEST nvmf_aer 00:49:23.849 ************************************ 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:49:23.849 * Looking for test storage... 00:49:23.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:49:23.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:23.849 --rc genhtml_branch_coverage=1 00:49:23.849 --rc genhtml_function_coverage=1 00:49:23.849 --rc genhtml_legend=1 00:49:23.849 --rc geninfo_all_blocks=1 00:49:23.849 --rc geninfo_unexecuted_blocks=1 00:49:23.849 00:49:23.849 ' 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:49:23.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:23.849 --rc genhtml_branch_coverage=1 00:49:23.849 --rc genhtml_function_coverage=1 00:49:23.849 --rc genhtml_legend=1 00:49:23.849 --rc geninfo_all_blocks=1 00:49:23.849 --rc geninfo_unexecuted_blocks=1 00:49:23.849 00:49:23.849 ' 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:49:23.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:23.849 --rc genhtml_branch_coverage=1 00:49:23.849 --rc genhtml_function_coverage=1 00:49:23.849 --rc genhtml_legend=1 00:49:23.849 --rc geninfo_all_blocks=1 00:49:23.849 --rc geninfo_unexecuted_blocks=1 00:49:23.849 00:49:23.849 ' 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:49:23.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:23.849 --rc genhtml_branch_coverage=1 00:49:23.849 --rc genhtml_function_coverage=1 00:49:23.849 --rc genhtml_legend=1 00:49:23.849 --rc geninfo_all_blocks=1 00:49:23.849 --rc geninfo_unexecuted_blocks=1 00:49:23.849 00:49:23.849 ' 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:49:23.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:49:23.849 05:44:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:49:25.753 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:49:25.753 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:49:25.753 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:49:25.754 Found net devices under 0000:0a:00.0: cvl_0_0 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:49:25.754 05:44:19 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:49:25.754 Found net devices under 0000:0a:00.1: cvl_0_1 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:49:25.754 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:49:26.012 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:49:26.012 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:49:26.012 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:49:26.012 05:44:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:49:26.012 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:49:26.012 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:49:26.012 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:49:26.012 
05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:49:26.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:49:26.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:49:26.012 00:49:26.012 --- 10.0.0.2 ping statistics --- 00:49:26.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:26.012 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:49:26.013 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:49:26.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:49:26.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:49:26.013 00:49:26.013 --- 10.0.0.1 ping statistics --- 00:49:26.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:26.013 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:49:26.013 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:49:26.013 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:49:26.013 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:49:26.013 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:26.013 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:49:26.013 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:49:26.013 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:26.013 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:49:26.013 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:49:26.013 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:49:26.013 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:49:26.013 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:49:26.013 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:49:26.013 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=706665 00:49:26.013 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:49:26.013 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 706665 00:49:26.013 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 706665 ']' 00:49:26.013 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:26.013 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:26.013 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:26.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:26.013 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:26.013 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:49:26.013 [2024-12-09 05:44:20.231308] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:49:26.013 [2024-12-09 05:44:20.231393] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:26.271 [2024-12-09 05:44:20.305818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:49:26.271 [2024-12-09 05:44:20.366653] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:26.271 [2024-12-09 05:44:20.366713] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:26.271 [2024-12-09 05:44:20.366738] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:26.271 [2024-12-09 05:44:20.366748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:26.271 [2024-12-09 05:44:20.366758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:49:26.271 [2024-12-09 05:44:20.368320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:49:26.271 [2024-12-09 05:44:20.368384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:49:26.271 [2024-12-09 05:44:20.368410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:49:26.271 [2024-12-09 05:44:20.368413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:26.271 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:26.271 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:49:26.272 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:49:26.272 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:49:26.272 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:49:26.530 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:49:26.530 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:49:26.530 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:26.530 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:49:26.530 [2024-12-09 05:44:20.509816] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:26.530 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:26.530 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:49:26.530 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:26.530 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:49:26.530 Malloc0 00:49:26.530 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:26.530 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:49:26.530 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:26.530 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:49:26.530 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:49:26.530 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:49:26.530 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:26.530 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:49:26.530 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:26.530 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:49:26.530 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:26.530 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:49:26.530 [2024-12-09 05:44:20.575346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:49:26.530 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:26.530 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:49:26.530 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:26.530 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:49:26.530 [ 00:49:26.530 { 00:49:26.530 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:49:26.530 "subtype": "Discovery", 00:49:26.530 "listen_addresses": [], 00:49:26.530 "allow_any_host": true, 00:49:26.530 "hosts": [] 00:49:26.530 }, 00:49:26.530 { 00:49:26.530 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:49:26.530 "subtype": "NVMe", 00:49:26.530 "listen_addresses": [ 00:49:26.531 { 00:49:26.531 "trtype": "TCP", 00:49:26.531 "adrfam": "IPv4", 00:49:26.531 "traddr": "10.0.0.2", 00:49:26.531 "trsvcid": "4420" 00:49:26.531 } 00:49:26.531 ], 00:49:26.531 "allow_any_host": true, 00:49:26.531 "hosts": [], 00:49:26.531 "serial_number": "SPDK00000000000001", 00:49:26.531 "model_number": "SPDK bdev Controller", 00:49:26.531 "max_namespaces": 2, 00:49:26.531 "min_cntlid": 1, 00:49:26.531 "max_cntlid": 65519, 00:49:26.531 "namespaces": [ 00:49:26.531 { 00:49:26.531 "nsid": 1, 00:49:26.531 "bdev_name": "Malloc0", 00:49:26.531 "name": "Malloc0", 00:49:26.531 "nguid": "0BA863947DF149CABA8D2D153957A42A", 00:49:26.531 "uuid": "0ba86394-7df1-49ca-ba8d-2d153957a42a" 00:49:26.531 } 00:49:26.531 ] 00:49:26.531 } 00:49:26.531 ] 00:49:26.531 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:26.531 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:49:26.531 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:49:26.531 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=706741 00:49:26.531 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:49:26.531 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:49:26.531 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:49:26.531 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:49:26.531 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:49:26.531 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:49:26.531 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:49:26.531 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:49:26.531 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:49:26.531 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:49:26.531 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:49:26.789 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:49:26.789 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:49:26.789 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:49:26.789 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:49:26.789 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:26.789 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:49:26.789 Malloc1 00:49:26.789 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:26.789 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:49:26.789 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:26.789 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:49:26.789 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:26.789 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:49:26.789 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:26.789 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:49:26.789 [ 00:49:26.789 { 00:49:26.789 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:49:26.789 "subtype": "Discovery", 00:49:26.789 "listen_addresses": [], 00:49:26.789 "allow_any_host": true, 00:49:26.789 "hosts": [] 00:49:26.789 }, 00:49:26.789 { 00:49:26.789 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:49:26.789 "subtype": "NVMe", 00:49:26.789 "listen_addresses": [ 00:49:26.789 { 00:49:26.789 "trtype": "TCP", 00:49:26.789 "adrfam": "IPv4", 00:49:26.789 "traddr": "10.0.0.2", 00:49:26.789 "trsvcid": "4420" 00:49:26.789 } 00:49:26.789 ], 00:49:26.789 "allow_any_host": true, 00:49:26.789 "hosts": [], 00:49:26.789 "serial_number": "SPDK00000000000001", 00:49:26.789 "model_number": "SPDK bdev Controller", 00:49:26.789 "max_namespaces": 2, 00:49:26.789 "min_cntlid": 1, 00:49:26.789 "max_cntlid": 65519, 00:49:26.789 "namespaces": [ 00:49:26.789 { 00:49:26.789 "nsid": 1, 00:49:26.789 "bdev_name": "Malloc0", 00:49:26.789 "name": "Malloc0", 00:49:26.789 "nguid": "0BA863947DF149CABA8D2D153957A42A", 00:49:26.789 "uuid": "0ba86394-7df1-49ca-ba8d-2d153957a42a" 00:49:26.789 }, 00:49:26.789 { 00:49:26.789 "nsid": 2, 00:49:26.789 "bdev_name": "Malloc1", 00:49:26.789 "name": "Malloc1", 00:49:26.789 "nguid": "752BACA648E34B52AE70CE07B95BBBD8", 00:49:26.789 "uuid": 
"752baca6-48e3-4b52-ae70-ce07b95bbbd8" 00:49:26.789 } 00:49:26.789 ] 00:49:26.789 } 00:49:26.789 ] 00:49:26.789 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:26.789 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 706741 00:49:26.789 Asynchronous Event Request test 00:49:26.789 Attaching to 10.0.0.2 00:49:26.789 Attached to 10.0.0.2 00:49:26.789 Registering asynchronous event callbacks... 00:49:26.789 Starting namespace attribute notice tests for all controllers... 00:49:26.789 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:49:26.789 aer_cb - Changed Namespace 00:49:26.789 Cleaning up... 00:49:26.789 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:49:26.789 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:26.789 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:49:26.789 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:26.789 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:49:26.789 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:26.789 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:49:26.789 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:26.789 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:49:26.789 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:26.790 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:49:26.790 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:26.790 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:49:26.790 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:49:26.790 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:49:26.790 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:49:26.790 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:49:26.790 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:49:26.790 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:49:26.790 05:44:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:49:26.790 rmmod nvme_tcp 00:49:26.790 rmmod nvme_fabrics 00:49:26.790 rmmod nvme_keyring 00:49:27.048 05:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:49:27.048 05:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:49:27.048 05:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:49:27.048 05:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 706665 ']' 00:49:27.048 05:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 706665 00:49:27.048 05:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 706665 ']' 00:49:27.048 05:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 706665 00:49:27.048 05:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:49:27.048 05:44:21 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:27.048 05:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 706665 00:49:27.048 05:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:49:27.048 05:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:49:27.048 05:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 706665' 00:49:27.048 killing process with pid 706665 00:49:27.048 05:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 706665 00:49:27.048 05:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 706665 00:49:27.309 05:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:49:27.309 05:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:49:27.309 05:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:49:27.309 05:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:49:27.309 05:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:49:27.309 05:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:49:27.309 05:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:49:27.309 05:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:49:27.309 05:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:49:27.309 05:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:27.309 05:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:27.309 05:44:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:29.213 05:44:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:49:29.213 00:49:29.213 real 0m5.825s 00:49:29.213 user 0m4.545s 00:49:29.213 sys 0m2.078s 00:49:29.213 05:44:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:29.213 05:44:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:49:29.213 ************************************ 00:49:29.213 END TEST nvmf_aer 00:49:29.213 ************************************ 00:49:29.213 05:44:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:49:29.213 05:44:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:49:29.213 05:44:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:29.213 05:44:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:49:29.213 ************************************ 00:49:29.213 START TEST nvmf_async_init 00:49:29.213 ************************************ 00:49:29.213 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:49:29.472 * Looking for test storage... 
00:49:29.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:49:29.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:29.472 --rc genhtml_branch_coverage=1 00:49:29.472 --rc genhtml_function_coverage=1 00:49:29.472 --rc genhtml_legend=1 00:49:29.472 --rc geninfo_all_blocks=1 00:49:29.472 --rc geninfo_unexecuted_blocks=1 00:49:29.472 00:49:29.472 ' 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:49:29.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:29.472 --rc genhtml_branch_coverage=1 00:49:29.472 --rc genhtml_function_coverage=1 00:49:29.472 --rc genhtml_legend=1 00:49:29.472 --rc geninfo_all_blocks=1 00:49:29.472 --rc geninfo_unexecuted_blocks=1 00:49:29.472 00:49:29.472 ' 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:49:29.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:29.472 --rc genhtml_branch_coverage=1 00:49:29.472 --rc genhtml_function_coverage=1 00:49:29.472 --rc genhtml_legend=1 00:49:29.472 --rc geninfo_all_blocks=1 00:49:29.472 --rc geninfo_unexecuted_blocks=1 00:49:29.472 00:49:29.472 ' 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:49:29.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:29.472 --rc genhtml_branch_coverage=1 00:49:29.472 --rc genhtml_function_coverage=1 00:49:29.472 --rc genhtml_legend=1 00:49:29.472 --rc geninfo_all_blocks=1 00:49:29.472 --rc geninfo_unexecuted_blocks=1 00:49:29.472 00:49:29.472 ' 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:29.472 05:44:23 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:49:29.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:49:29.472 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:49:29.473 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:49:29.473 05:44:23 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:49:29.473 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:49:29.473 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:49:29.473 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=d5f4a0835f014c7d99327a408e8aa9f8 00:49:29.473 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:49:29.473 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:49:29.473 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:49:29.473 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:49:29.473 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:49:29.473 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:49:29.473 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:29.473 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:29.473 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:29.473 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:49:29.473 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:49:29.473 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:49:29.473 05:44:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:49:32.004 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:49:32.004 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:49:32.004 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:49:32.005 Found net devices under 0000:0a:00.0: cvl_0_0 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:49:32.005 Found net devices under 0000:0a:00.1: cvl_0_1 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:49:32.005 05:44:25 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:49:32.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:49:32.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:49:32.005 00:49:32.005 --- 10.0.0.2 ping statistics --- 00:49:32.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:32.005 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:49:32.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:49:32.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:49:32.005 00:49:32.005 --- 10.0.0.1 ping statistics --- 00:49:32.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:32.005 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=708736 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 708736 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 708736 ']' 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:32.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:32.005 05:44:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:49:32.005 [2024-12-09 05:44:25.895903] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:49:32.005 [2024-12-09 05:44:25.895994] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:32.005 [2024-12-09 05:44:25.965833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:32.005 [2024-12-09 05:44:26.023916] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:32.005 [2024-12-09 05:44:26.023956] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:32.005 [2024-12-09 05:44:26.023983] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:32.005 [2024-12-09 05:44:26.023995] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:32.005 [2024-12-09 05:44:26.024005] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:49:32.005 [2024-12-09 05:44:26.024503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:49:32.005 [2024-12-09 05:44:26.165187] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:49:32.005 null0 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d5f4a0835f014c7d99327a408e8aa9f8 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.005 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:49:32.005 [2024-12-09 05:44:26.205446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:49:32.006 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.006 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:49:32.006 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.006 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:49:32.320 nvme0n1 00:49:32.320 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.320 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:49:32.320 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.320 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:49:32.320 [ 00:49:32.320 { 00:49:32.320 "name": "nvme0n1", 00:49:32.320 "aliases": [ 00:49:32.320 "d5f4a083-5f01-4c7d-9932-7a408e8aa9f8" 00:49:32.320 ], 00:49:32.320 "product_name": "NVMe disk", 00:49:32.320 "block_size": 512, 00:49:32.320 "num_blocks": 2097152, 00:49:32.320 "uuid": "d5f4a083-5f01-4c7d-9932-7a408e8aa9f8", 00:49:32.320 "numa_id": 0, 00:49:32.320 "assigned_rate_limits": { 00:49:32.320 "rw_ios_per_sec": 0, 00:49:32.320 "rw_mbytes_per_sec": 0, 00:49:32.320 "r_mbytes_per_sec": 0, 00:49:32.320 "w_mbytes_per_sec": 0 00:49:32.320 }, 00:49:32.320 "claimed": false, 00:49:32.320 "zoned": false, 00:49:32.320 "supported_io_types": { 00:49:32.320 "read": true, 00:49:32.320 "write": true, 00:49:32.320 "unmap": false, 00:49:32.320 "flush": true, 00:49:32.320 "reset": true, 00:49:32.320 "nvme_admin": true, 00:49:32.320 "nvme_io": true, 00:49:32.320 "nvme_io_md": false, 00:49:32.320 "write_zeroes": true, 00:49:32.320 "zcopy": false, 00:49:32.320 "get_zone_info": false, 00:49:32.320 "zone_management": false, 00:49:32.320 "zone_append": false, 00:49:32.320 "compare": true, 00:49:32.320 "compare_and_write": true, 00:49:32.320 "abort": true, 00:49:32.320 "seek_hole": false, 00:49:32.320 "seek_data": false, 00:49:32.320 "copy": true, 00:49:32.320 "nvme_iov_md": false 00:49:32.320 }, 00:49:32.320 
"memory_domains": [ 00:49:32.320 { 00:49:32.320 "dma_device_id": "system", 00:49:32.320 "dma_device_type": 1 00:49:32.320 } 00:49:32.320 ], 00:49:32.320 "driver_specific": { 00:49:32.320 "nvme": [ 00:49:32.320 { 00:49:32.320 "trid": { 00:49:32.320 "trtype": "TCP", 00:49:32.320 "adrfam": "IPv4", 00:49:32.320 "traddr": "10.0.0.2", 00:49:32.320 "trsvcid": "4420", 00:49:32.320 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:49:32.320 }, 00:49:32.320 "ctrlr_data": { 00:49:32.320 "cntlid": 1, 00:49:32.320 "vendor_id": "0x8086", 00:49:32.320 "model_number": "SPDK bdev Controller", 00:49:32.320 "serial_number": "00000000000000000000", 00:49:32.320 "firmware_revision": "25.01", 00:49:32.320 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:49:32.320 "oacs": { 00:49:32.320 "security": 0, 00:49:32.320 "format": 0, 00:49:32.320 "firmware": 0, 00:49:32.320 "ns_manage": 0 00:49:32.320 }, 00:49:32.320 "multi_ctrlr": true, 00:49:32.320 "ana_reporting": false 00:49:32.320 }, 00:49:32.320 "vs": { 00:49:32.320 "nvme_version": "1.3" 00:49:32.320 }, 00:49:32.320 "ns_data": { 00:49:32.320 "id": 1, 00:49:32.320 "can_share": true 00:49:32.320 } 00:49:32.320 } 00:49:32.320 ], 00:49:32.320 "mp_policy": "active_passive" 00:49:32.320 } 00:49:32.320 } 00:49:32.320 ] 00:49:32.320 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.320 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:49:32.320 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.320 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:49:32.320 [2024-12-09 05:44:26.453964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:49:32.320 [2024-12-09 05:44:26.454054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b39a0 (9): Bad file descriptor 00:49:32.612 [2024-12-09 05:44:26.586418] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:49:32.612 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.612 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:49:32.612 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.612 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:49:32.612 [ 00:49:32.612 { 00:49:32.612 "name": "nvme0n1", 00:49:32.612 "aliases": [ 00:49:32.612 "d5f4a083-5f01-4c7d-9932-7a408e8aa9f8" 00:49:32.612 ], 00:49:32.612 "product_name": "NVMe disk", 00:49:32.612 "block_size": 512, 00:49:32.612 "num_blocks": 2097152, 00:49:32.612 "uuid": "d5f4a083-5f01-4c7d-9932-7a408e8aa9f8", 00:49:32.612 "numa_id": 0, 00:49:32.612 "assigned_rate_limits": { 00:49:32.612 "rw_ios_per_sec": 0, 00:49:32.612 "rw_mbytes_per_sec": 0, 00:49:32.612 "r_mbytes_per_sec": 0, 00:49:32.612 "w_mbytes_per_sec": 0 00:49:32.612 }, 00:49:32.612 "claimed": false, 00:49:32.612 "zoned": false, 00:49:32.612 "supported_io_types": { 00:49:32.612 "read": true, 00:49:32.612 "write": true, 00:49:32.612 "unmap": false, 00:49:32.612 "flush": true, 00:49:32.612 "reset": true, 00:49:32.612 "nvme_admin": true, 00:49:32.612 "nvme_io": true, 00:49:32.612 "nvme_io_md": false, 00:49:32.612 "write_zeroes": true, 00:49:32.612 "zcopy": false, 00:49:32.612 "get_zone_info": false, 00:49:32.612 "zone_management": false, 00:49:32.612 "zone_append": false, 00:49:32.612 "compare": true, 00:49:32.613 "compare_and_write": true, 00:49:32.613 "abort": true, 00:49:32.613 "seek_hole": false, 00:49:32.613 "seek_data": false, 00:49:32.613 "copy": true, 00:49:32.613 "nvme_iov_md": false 00:49:32.613 }, 00:49:32.613 "memory_domains": [ 00:49:32.613 { 00:49:32.613 "dma_device_id": "system", 00:49:32.613 "dma_device_type": 1 00:49:32.613 } 00:49:32.613 ], 00:49:32.613 "driver_specific": { 00:49:32.613 "nvme": [ 00:49:32.613 { 00:49:32.613 "trid": { 00:49:32.613 "trtype": "TCP", 00:49:32.613 "adrfam": "IPv4", 00:49:32.613 "traddr": "10.0.0.2", 00:49:32.613 "trsvcid": "4420", 00:49:32.613 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:49:32.613 }, 00:49:32.613 "ctrlr_data": { 00:49:32.613 "cntlid": 2, 00:49:32.613 "vendor_id": "0x8086", 00:49:32.613 "model_number": "SPDK bdev Controller", 00:49:32.613 "serial_number": "00000000000000000000", 00:49:32.613 "firmware_revision": "25.01", 00:49:32.613 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:49:32.613 "oacs": { 00:49:32.613 "security": 0, 00:49:32.613 "format": 0, 00:49:32.613 "firmware": 0, 00:49:32.613 "ns_manage": 0 00:49:32.613 }, 00:49:32.613 "multi_ctrlr": true, 00:49:32.613 "ana_reporting": false 00:49:32.613 }, 00:49:32.613 "vs": { 00:49:32.613 "nvme_version": "1.3" 00:49:32.613 }, 00:49:32.613 "ns_data": { 00:49:32.613 "id": 1, 00:49:32.613 "can_share": true 00:49:32.613 } 00:49:32.613 } 00:49:32.613 ], 00:49:32.613 "mp_policy": "active_passive" 00:49:32.613 } 00:49:32.613 } 00:49:32.613 ] 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
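The host-side flow just traced reduces to attaching a bdev_nvme controller to the 4420 listener, reading the resulting nvme0n1 bdev back, resetting the controller, verifying again (cntlid moves from 1 to 2 in the two JSON dumps above), and detaching. A minimal replay with scripts/rpc.py, assuming the same target address and subsystem as in this run, would be roughly:

  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
         -n nqn.2016-06.io.spdk:cnode0
  rpc.py bdev_get_bdevs -b nvme0n1        # reports cntlid 1
  rpc.py bdev_nvme_reset_controller nvme0
  rpc.py bdev_get_bdevs -b nvme0n1        # reports cntlid 2 after the reset
  rpc.py bdev_nvme_detach_controller nvme0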
00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.g9cl3peHpm 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.g9cl3peHpm 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.g9cl3peHpm 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:49:32.613 [2024-12-09 05:44:26.642585] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:49:32.613 [2024-12-09 05:44:26.642740] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:49:32.613 [2024-12-09 05:44:26.658643] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:49:32.613 nvme0n1 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:49:32.613 [ 00:49:32.613 { 00:49:32.613 "name": "nvme0n1", 00:49:32.613 "aliases": [ 00:49:32.613 "d5f4a083-5f01-4c7d-9932-7a408e8aa9f8" 00:49:32.613 ], 00:49:32.613 "product_name": "NVMe disk", 00:49:32.613 "block_size": 512, 00:49:32.613 "num_blocks": 2097152, 00:49:32.613 "uuid": "d5f4a083-5f01-4c7d-9932-7a408e8aa9f8", 00:49:32.613 "numa_id": 0, 00:49:32.613 "assigned_rate_limits": { 00:49:32.613 "rw_ios_per_sec": 0, 00:49:32.613 "rw_mbytes_per_sec": 0, 00:49:32.613 "r_mbytes_per_sec": 0, 00:49:32.613 "w_mbytes_per_sec": 0 00:49:32.613 }, 00:49:32.613 "claimed": false, 00:49:32.613 "zoned": false, 00:49:32.613 "supported_io_types": { 00:49:32.613 "read": true, 00:49:32.613 "write": true, 00:49:32.613 "unmap": false, 00:49:32.613 "flush": true, 00:49:32.613 "reset": true, 00:49:32.613 "nvme_admin": true, 00:49:32.613 "nvme_io": true, 00:49:32.613 "nvme_io_md": false, 00:49:32.613 "write_zeroes": true, 00:49:32.613 "zcopy": false, 00:49:32.613 "get_zone_info": false, 00:49:32.613 "zone_management": false, 00:49:32.613 "zone_append": false, 00:49:32.613 "compare": true, 00:49:32.613 "compare_and_write": true, 00:49:32.613 "abort": true, 00:49:32.613 "seek_hole": false, 00:49:32.613 "seek_data": false, 00:49:32.613 "copy": true, 00:49:32.613 "nvme_iov_md": false 00:49:32.613 }, 00:49:32.613 "memory_domains": [ 00:49:32.613 { 00:49:32.613 "dma_device_id": "system", 00:49:32.613 "dma_device_type": 1 00:49:32.613 } 00:49:32.613 ], 00:49:32.613 "driver_specific": { 00:49:32.613 "nvme": [ 00:49:32.613 { 00:49:32.613 "trid": { 00:49:32.613 "trtype": "TCP", 00:49:32.613 "adrfam": "IPv4", 00:49:32.613 "traddr": "10.0.0.2", 00:49:32.613 "trsvcid": "4421", 00:49:32.613 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:49:32.613 }, 00:49:32.613 "ctrlr_data": { 00:49:32.613 "cntlid": 3, 00:49:32.613 "vendor_id": "0x8086", 00:49:32.613 "model_number": "SPDK bdev Controller", 00:49:32.613 "serial_number": "00000000000000000000", 00:49:32.613 "firmware_revision": "25.01", 00:49:32.613 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:49:32.613 "oacs": { 00:49:32.613 "security": 0, 00:49:32.613 "format": 0, 00:49:32.613 "firmware": 0, 00:49:32.613 "ns_manage": 0 00:49:32.613 }, 00:49:32.613 "multi_ctrlr": true, 00:49:32.613 "ana_reporting": false 00:49:32.613 }, 00:49:32.613 "vs": { 00:49:32.613 "nvme_version": "1.3" 00:49:32.613 }, 00:49:32.613 "ns_data": { 00:49:32.613 "id": 1, 00:49:32.613 "can_share": true 00:49:32.613 } 00:49:32.613 } 00:49:32.613 ], 00:49:32.613 "mp_policy": "active_passive" 00:49:32.613 } 00:49:32.613 } 00:49:32.613 ] 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.g9cl3peHpm 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
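The TLS portion of the test, traced above, adds a PSK-protected listener on port 4421 and re-attaches with an explicit host NQN and key; the third JSON dump shows the new connection as cntlid 3 on trsvcid 4421. A sketch of that sequence with scripts/rpc.py, reconstructed from the trace and using the interim test key printed in the log rather than a real secret:

  KEY_PATH=$(mktemp)
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY_PATH"
  chmod 0600 "$KEY_PATH"
  rpc.py keyring_file_add_key key0 "$KEY_PATH"
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
         -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0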
00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:49:32.613 rmmod nvme_tcp 00:49:32.613 rmmod nvme_fabrics 00:49:32.613 rmmod nvme_keyring 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:49:32.613 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 708736 ']' 00:49:32.871 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 708736 00:49:32.871 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 708736 ']' 00:49:32.871 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 708736 00:49:32.871 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:49:32.871 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:32.871 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 708736 00:49:32.871 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:49:32.871 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:49:32.871 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 708736' 00:49:32.871 killing process with pid 708736 00:49:32.871 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 708736 00:49:32.871 05:44:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 708736 00:49:33.129 05:44:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:49:33.129 05:44:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:49:33.129 05:44:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:49:33.129 05:44:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:49:33.129 05:44:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:49:33.129 05:44:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:49:33.129 05:44:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:49:33.129 05:44:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:49:33.129 05:44:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:49:33.129 05:44:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:33.129 
05:44:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:33.129 05:44:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:35.034 05:44:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:49:35.034 00:49:35.034 real 0m5.720s 00:49:35.034 user 0m2.224s 00:49:35.034 sys 0m1.928s 00:49:35.034 05:44:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:35.034 05:44:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:49:35.034 ************************************ 00:49:35.034 END TEST nvmf_async_init 00:49:35.034 ************************************ 00:49:35.034 05:44:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:49:35.034 05:44:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:49:35.034 05:44:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:35.034 05:44:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:49:35.034 ************************************ 00:49:35.034 START TEST dma 00:49:35.034 ************************************ 00:49:35.034 05:44:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:49:35.293 * Looking for test storage... 00:49:35.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:49:35.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:35.293 --rc genhtml_branch_coverage=1 00:49:35.293 --rc genhtml_function_coverage=1 00:49:35.293 --rc genhtml_legend=1 00:49:35.293 --rc geninfo_all_blocks=1 00:49:35.293 --rc geninfo_unexecuted_blocks=1 00:49:35.293 00:49:35.293 ' 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:49:35.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:35.293 --rc genhtml_branch_coverage=1 00:49:35.293 --rc genhtml_function_coverage=1 00:49:35.293 --rc genhtml_legend=1 00:49:35.293 --rc geninfo_all_blocks=1 00:49:35.293 --rc geninfo_unexecuted_blocks=1 00:49:35.293 00:49:35.293 ' 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:49:35.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:35.293 --rc genhtml_branch_coverage=1 00:49:35.293 --rc genhtml_function_coverage=1 00:49:35.293 --rc genhtml_legend=1 00:49:35.293 --rc geninfo_all_blocks=1 00:49:35.293 --rc geninfo_unexecuted_blocks=1 00:49:35.293 00:49:35.293 ' 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:49:35.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:35.293 --rc genhtml_branch_coverage=1 00:49:35.293 --rc genhtml_function_coverage=1 00:49:35.293 --rc genhtml_legend=1 00:49:35.293 --rc geninfo_all_blocks=1 00:49:35.293 --rc geninfo_unexecuted_blocks=1 00:49:35.293 00:49:35.293 ' 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:35.293 
05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:35.293 05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:35.294 05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:49:35.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:49:35.294 05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:49:35.294 05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:49:35.294 05:44:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:49:35.294 05:44:29 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:49:35.294 05:44:29 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:49:35.294 00:49:35.294 real 0m0.158s 00:49:35.294 user 0m0.103s 00:49:35.294 sys 0m0.063s 00:49:35.294 05:44:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:35.294 05:44:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:49:35.294 ************************************ 00:49:35.294 END TEST dma 00:49:35.294 ************************************ 00:49:35.294 05:44:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:49:35.294 05:44:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:49:35.294 05:44:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:35.294 05:44:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:49:35.294 ************************************ 00:49:35.294 START TEST nvmf_identify 00:49:35.294 
************************************ 00:49:35.294 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:49:35.294 * Looking for test storage... 00:49:35.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:49:35.294 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:49:35.294 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:49:35.294 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:49:35.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:35.553 --rc genhtml_branch_coverage=1 00:49:35.553 --rc genhtml_function_coverage=1 00:49:35.553 --rc genhtml_legend=1 00:49:35.553 --rc geninfo_all_blocks=1 00:49:35.553 --rc geninfo_unexecuted_blocks=1 00:49:35.553 00:49:35.553 ' 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:49:35.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:35.553 --rc genhtml_branch_coverage=1 00:49:35.553 --rc genhtml_function_coverage=1 00:49:35.553 --rc genhtml_legend=1 00:49:35.553 --rc geninfo_all_blocks=1 00:49:35.553 --rc geninfo_unexecuted_blocks=1 00:49:35.553 00:49:35.553 ' 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:49:35.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:35.553 --rc genhtml_branch_coverage=1 00:49:35.553 --rc genhtml_function_coverage=1 00:49:35.553 --rc genhtml_legend=1 00:49:35.553 --rc geninfo_all_blocks=1 00:49:35.553 --rc geninfo_unexecuted_blocks=1 00:49:35.553 00:49:35.553 ' 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:49:35.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:35.553 --rc genhtml_branch_coverage=1 00:49:35.553 --rc genhtml_function_coverage=1 00:49:35.553 --rc genhtml_legend=1 00:49:35.553 --rc geninfo_all_blocks=1 00:49:35.553 --rc geninfo_unexecuted_blocks=1 00:49:35.553 00:49:35.553 ' 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:49:35.553 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:49:35.553 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:49:35.554 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:35.554 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:35.554 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:35.554 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:49:35.554 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:49:35.554 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:49:35.554 05:44:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:49:38.087 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:49:38.087 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:49:38.087 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:49:38.087 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:49:38.087 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:49:38.087 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:49:38.087 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:49:38.087 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:49:38.087 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:49:38.087 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:49:38.087 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:49:38.087 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:49:38.087 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:49:38.087 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:49:38.087 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:49:38.087 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:49:38.087 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:49:38.087 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:49:38.087 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:49:38.087 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:49:38.088 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:49:38.088 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
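The xtrace in this stretch is nvmf/common.sh picking NICs for the test: it seeds the e810/x722/mlx arrays with known PCI device IDs (the E810 ports here report 0x8086 - 0x159b), keeps the e810 set because of the [[ e810 == e810 ]] and tcp != rdma branches above, and then resolves each PCI address to its kernel interface through sysfs before checking that the link is up. A minimal standalone sketch of that sysfs lookup, assuming the same E810 port at 0000:0a:00.0 seen in this run (the operstate read below is an assumption standing in for however the script decides an interface counts as "up"):

pci=0000:0a:00.0                                     # E810 port from this log (0x8086 - 0x159b)
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)     # sysfs lists the netdev(s) backed by this PCI function
pci_net_devs=("${pci_net_devs[@]##*/}")              # strip the path, keep only the interface name
for net_dev in "${pci_net_devs[@]}"; do
  if [[ $(cat "/sys/class/net/$net_dev/operstate") == up ]]; then
    echo "Found net devices under $pci: $net_dev"
  fi
done

The interfaces found this way become the target and initiator sides of the TCP setup that follows in the log.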
00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:49:38.088 Found net devices under 0000:0a:00.0: cvl_0_0 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:49:38.088 Found net devices under 0000:0a:00.1: cvl_0_1 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:49:38.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:49:38.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:49:38.088 00:49:38.088 --- 10.0.0.2 ping statistics --- 00:49:38.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:38.088 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:49:38.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:49:38.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:49:38.088 00:49:38.088 --- 10.0.0.1 ping statistics --- 00:49:38.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:38.088 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:49:38.088 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:38.089 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:49:38.089 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:49:38.089 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:38.089 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:49:38.089 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:49:38.089 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:49:38.089 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:49:38.089 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:49:38.089 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=710952 00:49:38.089 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:49:38.089 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:49:38.089 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 710952 00:49:38.089 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 710952 ']' 00:49:38.089 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:38.089 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:38.089 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:38.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:38.089 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:38.089 05:44:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:49:38.089 [2024-12-09 05:44:31.993795] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:49:38.089 [2024-12-09 05:44:31.993891] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:38.089 [2024-12-09 05:44:32.066605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:49:38.089 [2024-12-09 05:44:32.124736] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:38.089 [2024-12-09 05:44:32.124786] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:38.089 [2024-12-09 05:44:32.124826] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:38.089 [2024-12-09 05:44:32.124837] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:38.089 [2024-12-09 05:44:32.124847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:49:38.089 [2024-12-09 05:44:32.126318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:49:38.089 [2024-12-09 05:44:32.126380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:49:38.089 [2024-12-09 05:44:32.126405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:49:38.089 [2024-12-09 05:44:32.126409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:38.089 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:38.089 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:49:38.089 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:49:38.089 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:38.089 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:49:38.089 [2024-12-09 05:44:32.249878] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:38.089 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:38.089 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:49:38.089 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:49:38.089 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:49:38.089 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:49:38.089 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:38.089 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:49:38.348 Malloc0 00:49:38.348 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:38.348 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:49:38.348 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:38.348 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:49:38.348 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:38.348 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:49:38.348 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:38.348 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:49:38.348 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:38.348 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:49:38.348 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:38.348 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:49:38.348 [2024-12-09 05:44:32.341989] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:49:38.349 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:38.349 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:49:38.349 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:38.349 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:49:38.349 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:38.349 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:49:38.349 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:38.349 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:49:38.349 [ 00:49:38.349 { 00:49:38.349 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:49:38.349 "subtype": "Discovery", 00:49:38.349 "listen_addresses": [ 00:49:38.349 { 00:49:38.349 "trtype": "TCP", 00:49:38.349 "adrfam": "IPv4", 00:49:38.349 "traddr": "10.0.0.2", 00:49:38.349 "trsvcid": "4420" 00:49:38.349 } 00:49:38.349 ], 00:49:38.349 "allow_any_host": true, 00:49:38.349 "hosts": [] 00:49:38.349 }, 00:49:38.349 { 00:49:38.349 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:49:38.349 "subtype": "NVMe", 00:49:38.349 "listen_addresses": [ 00:49:38.349 { 00:49:38.349 "trtype": "TCP", 00:49:38.349 "adrfam": "IPv4", 00:49:38.349 "traddr": "10.0.0.2", 00:49:38.349 "trsvcid": "4420" 00:49:38.349 } 00:49:38.349 ], 00:49:38.349 "allow_any_host": true, 00:49:38.349 "hosts": [], 00:49:38.349 "serial_number": "SPDK00000000000001", 00:49:38.349 "model_number": "SPDK bdev Controller", 00:49:38.349 "max_namespaces": 32, 00:49:38.349 "min_cntlid": 1, 00:49:38.349 "max_cntlid": 65519, 00:49:38.349 "namespaces": [ 00:49:38.349 { 00:49:38.349 "nsid": 1, 00:49:38.349 "bdev_name": "Malloc0", 00:49:38.349 "name": "Malloc0", 00:49:38.349 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:49:38.349 "eui64": "ABCDEF0123456789", 00:49:38.349 "uuid": "f75b3134-7bb0-4611-832a-9b726f4e1665" 00:49:38.349 } 00:49:38.349 ] 00:49:38.349 } 00:49:38.349 ] 00:49:38.349 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:38.349 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:49:38.349 [2024-12-09 05:44:32.385837] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:49:38.349 [2024-12-09 05:44:32.385883] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid710981 ] 00:49:38.349 [2024-12-09 05:44:32.435263] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:49:38.349 [2024-12-09 05:44:32.435329] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:49:38.349 [2024-12-09 05:44:32.435340] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:49:38.349 [2024-12-09 05:44:32.435362] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:49:38.349 [2024-12-09 05:44:32.435377] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:49:38.349 [2024-12-09 05:44:32.439687] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:49:38.349 [2024-12-09 05:44:32.439749] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x500690 0 00:49:38.349 [2024-12-09 05:44:32.439881] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:49:38.349 [2024-12-09 05:44:32.439898] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:49:38.349 [2024-12-09 05:44:32.439907] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:49:38.349 [2024-12-09 05:44:32.439912] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:49:38.349 [2024-12-09 05:44:32.439955] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.349 [2024-12-09 05:44:32.439969] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.349 [2024-12-09 05:44:32.439977] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x500690) 00:49:38.349 [2024-12-09 05:44:32.439994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:49:38.349 [2024-12-09 05:44:32.440020] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562100, cid 0, qid 0 00:49:38.349 [2024-12-09 05:44:32.447288] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.349 [2024-12-09 05:44:32.447313] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.349 [2024-12-09 05:44:32.447322] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.349 [2024-12-09 05:44:32.447330] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562100) on tqpair=0x500690 00:49:38.349 [2024-12-09 05:44:32.447346] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:49:38.349 [2024-12-09 05:44:32.447358] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:49:38.349 [2024-12-09 05:44:32.447368] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:49:38.349 [2024-12-09 05:44:32.447392] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.349 [2024-12-09 05:44:32.447401] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.349 [2024-12-09 05:44:32.447408] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x500690) 00:49:38.349 [2024-12-09 05:44:32.447419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.349 [2024-12-09 05:44:32.447444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562100, cid 0, qid 0 00:49:38.349 [2024-12-09 05:44:32.447545] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.349 [2024-12-09 05:44:32.447560] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.349 [2024-12-09 05:44:32.447567] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.349 [2024-12-09 05:44:32.447574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562100) on tqpair=0x500690 00:49:38.349 [2024-12-09 05:44:32.447587] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:49:38.349 [2024-12-09 05:44:32.447602] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:49:38.349 [2024-12-09 05:44:32.447615] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.349 [2024-12-09 05:44:32.447623] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.349 [2024-12-09 05:44:32.447629] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x500690) 00:49:38.349 [2024-12-09 05:44:32.447640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.349 [2024-12-09 05:44:32.447662] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562100, cid 0, qid 0 00:49:38.349 [2024-12-09 05:44:32.447735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.349 [2024-12-09 05:44:32.447747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.349 [2024-12-09 05:44:32.447754] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.349 [2024-12-09 05:44:32.447761] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562100) on tqpair=0x500690 00:49:38.349 [2024-12-09 05:44:32.447769] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:49:38.349 [2024-12-09 05:44:32.447783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:49:38.349 [2024-12-09 05:44:32.447796] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.350 [2024-12-09 05:44:32.447804] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.350 [2024-12-09 05:44:32.447810] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x500690) 00:49:38.350 [2024-12-09 05:44:32.447820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.350 [2024-12-09 05:44:32.447841] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562100, cid 0, qid 0 
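The *DEBUG*/*NOTICE* lines in this section are spdk_nvme_identify (run with -L all) bringing up its admin queue against the discovery subsystem: TCP connect plus ICReq/ICResp, FABRIC CONNECT on cid 0, PROPERTY GET reads of VS and CAP, the CC.EN = 1 / CSTS.RDY = 1 enable handshake, and then the IDENTIFY commands whose output the test inspects. As a hedged illustration only (not part of identify.sh), the listener this run creates at 10.0.0.2:4420 could be exercised from the initiator side with the kernel host stack via nvme-cli:

# Query the discovery service the target exposes in this run
nvme discover -t tcp -a 10.0.0.2 -s 4420
# Connect to the test subsystem created earlier and list the resulting controller/namespace
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme list

Either path performs the same fabrics exchange traced above; the SPDK identify tool simply logs every state transition because the test enables all log flags.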
00:49:38.350 [2024-12-09 05:44:32.447912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.350 [2024-12-09 05:44:32.447933] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.350 [2024-12-09 05:44:32.447941] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.350 [2024-12-09 05:44:32.447948] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562100) on tqpair=0x500690 00:49:38.350 [2024-12-09 05:44:32.447956] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:49:38.350 [2024-12-09 05:44:32.447973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.350 [2024-12-09 05:44:32.447983] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.350 [2024-12-09 05:44:32.447989] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x500690) 00:49:38.350 [2024-12-09 05:44:32.447999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.350 [2024-12-09 05:44:32.448020] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562100, cid 0, qid 0 00:49:38.350 [2024-12-09 05:44:32.448092] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.350 [2024-12-09 05:44:32.448106] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.350 [2024-12-09 05:44:32.448113] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.350 [2024-12-09 05:44:32.448120] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562100) on tqpair=0x500690 00:49:38.350 [2024-12-09 05:44:32.448128] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:49:38.350 [2024-12-09 05:44:32.448137] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:49:38.350 [2024-12-09 05:44:32.448150] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:49:38.350 [2024-12-09 05:44:32.448260] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:49:38.350 [2024-12-09 05:44:32.448268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:49:38.350 [2024-12-09 05:44:32.448291] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.350 [2024-12-09 05:44:32.448300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.350 [2024-12-09 05:44:32.448306] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x500690) 00:49:38.350 [2024-12-09 05:44:32.448316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.350 [2024-12-09 05:44:32.448338] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562100, cid 0, qid 0 00:49:38.350 [2024-12-09 05:44:32.448417] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.350 [2024-12-09 05:44:32.448430] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.350 [2024-12-09 05:44:32.448437] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.350 [2024-12-09 05:44:32.448443] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562100) on tqpair=0x500690 00:49:38.350 [2024-12-09 05:44:32.448452] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:49:38.350 [2024-12-09 05:44:32.448468] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.350 [2024-12-09 05:44:32.448477] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.350 [2024-12-09 05:44:32.448483] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x500690) 00:49:38.350 [2024-12-09 05:44:32.448493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.350 [2024-12-09 05:44:32.448513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562100, cid 0, qid 0 00:49:38.350 [2024-12-09 05:44:32.448585] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.350 [2024-12-09 05:44:32.448599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.350 [2024-12-09 05:44:32.448606] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.350 [2024-12-09 05:44:32.448612] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562100) on tqpair=0x500690 00:49:38.350 [2024-12-09 05:44:32.448620] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:49:38.350 [2024-12-09 05:44:32.448628] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:49:38.350 [2024-12-09 05:44:32.448642] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:49:38.350 [2024-12-09 05:44:32.448656] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:49:38.350 [2024-12-09 05:44:32.448672] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.350 [2024-12-09 05:44:32.448680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x500690) 00:49:38.350 [2024-12-09 05:44:32.448690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.350 [2024-12-09 05:44:32.448711] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562100, cid 0, qid 0 00:49:38.350 [2024-12-09 05:44:32.448826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:49:38.350 [2024-12-09 05:44:32.448840] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:49:38.350 [2024-12-09 05:44:32.448847] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:49:38.350 [2024-12-09 05:44:32.448853] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x500690): datao=0, datal=4096, cccid=0 00:49:38.350 [2024-12-09 05:44:32.448861] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x562100) on tqpair(0x500690): expected_datao=0, payload_size=4096 00:49:38.350 [2024-12-09 05:44:32.448868] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.350 [2024-12-09 05:44:32.448886] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:49:38.350 [2024-12-09 05:44:32.448895] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:49:38.350 [2024-12-09 05:44:32.448925] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.350 [2024-12-09 05:44:32.448936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.350 [2024-12-09 05:44:32.448943] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.350 [2024-12-09 05:44:32.448950] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562100) on tqpair=0x500690 00:49:38.350 [2024-12-09 05:44:32.448962] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:49:38.350 [2024-12-09 05:44:32.448970] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:49:38.350 [2024-12-09 05:44:32.448978] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:49:38.350 [2024-12-09 05:44:32.448986] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:49:38.350 [2024-12-09 05:44:32.448994] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:49:38.350 [2024-12-09 05:44:32.449003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:49:38.350 [2024-12-09 05:44:32.449017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:49:38.350 [2024-12-09 05:44:32.449034] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.350 [2024-12-09 05:44:32.449042] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.350 [2024-12-09 05:44:32.449048] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x500690) 00:49:38.351 [2024-12-09 05:44:32.449059] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:49:38.351 [2024-12-09 05:44:32.449080] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562100, cid 0, qid 0 00:49:38.351 [2024-12-09 05:44:32.449162] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.351 [2024-12-09 05:44:32.449175] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.351 [2024-12-09 05:44:32.449182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.351 [2024-12-09 05:44:32.449188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562100) on tqpair=0x500690 00:49:38.351 [2024-12-09 05:44:32.449199] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.351 [2024-12-09 05:44:32.449207] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.351 [2024-12-09 05:44:32.449213] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x500690) 00:49:38.351 [2024-12-09 
05:44:32.449223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:49:38.351 [2024-12-09 05:44:32.449233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.351 [2024-12-09 05:44:32.449240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.351 [2024-12-09 05:44:32.449246] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x500690) 00:49:38.351 [2024-12-09 05:44:32.449254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:49:38.351 [2024-12-09 05:44:32.449264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.351 [2024-12-09 05:44:32.449278] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.351 [2024-12-09 05:44:32.449287] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x500690) 00:49:38.351 [2024-12-09 05:44:32.449296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:49:38.351 [2024-12-09 05:44:32.449306] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.351 [2024-12-09 05:44:32.449312] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.351 [2024-12-09 05:44:32.449318] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x500690) 00:49:38.351 [2024-12-09 05:44:32.449327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:49:38.351 [2024-12-09 05:44:32.449336] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:49:38.351 [2024-12-09 05:44:32.449355] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:49:38.351 [2024-12-09 05:44:32.449368] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.351 [2024-12-09 05:44:32.449375] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x500690) 00:49:38.351 [2024-12-09 05:44:32.449385] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.351 [2024-12-09 05:44:32.449408] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562100, cid 0, qid 0 00:49:38.351 [2024-12-09 05:44:32.449419] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562280, cid 1, qid 0 00:49:38.351 [2024-12-09 05:44:32.449427] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562400, cid 2, qid 0 00:49:38.351 [2024-12-09 05:44:32.449435] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562580, cid 3, qid 0 00:49:38.351 [2024-12-09 05:44:32.449446] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562700, cid 4, qid 0 00:49:38.351 [2024-12-09 05:44:32.449541] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.351 [2024-12-09 05:44:32.449554] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.351 [2024-12-09 05:44:32.449561] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.351 
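
The entries above show the host configuring asynchronous events (SET FEATURES ASYNC EVENT CONFIGURATION) and then arming four ASYNC EVENT REQUEST commands (cid 0-3) before moving on to the keep-alive timer. The SPDK host library submits those AER commands itself; an application only supplies a completion callback. A minimal sketch against the public SPDK host API, assuming a ctrlr handle obtained from spdk_nvme_connect(); the callback name and printout are illustrative, not part of this test:

/* Sketch: registering an AER handler with the SPDK NVMe host library. */
#include "spdk/nvme.h"
#include <stdio.h>

static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
        (void)arg;
        /* cdw0 carries the async event type/info; a real handler would
         * decode it and, for example, re-read the changed log page. */
        if (spdk_nvme_cpl_is_error(cpl)) {
                fprintf(stderr, "AER completed with error\n");
                return;
        }
        printf("async event received, cdw0=0x%x\n", cpl->cdw0);
}

void
arm_aer(struct spdk_nvme_ctrlr *ctrlr)
{
        /* The library sends the ASYNC EVENT REQUEST commands itself (the
         * four cid 0-3 commands in the trace); the application only
         * registers the callback that fires when one completes. */
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
}
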
[2024-12-09 05:44:32.449568] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562700) on tqpair=0x500690 00:49:38.351 [2024-12-09 05:44:32.449576] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:49:38.351 [2024-12-09 05:44:32.449585] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:49:38.351 [2024-12-09 05:44:32.449602] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.351 [2024-12-09 05:44:32.449612] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x500690) 00:49:38.351 [2024-12-09 05:44:32.449622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.351 [2024-12-09 05:44:32.449642] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562700, cid 4, qid 0 00:49:38.351 [2024-12-09 05:44:32.449736] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:49:38.351 [2024-12-09 05:44:32.449751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:49:38.351 [2024-12-09 05:44:32.449758] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:49:38.351 [2024-12-09 05:44:32.449764] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x500690): datao=0, datal=4096, cccid=4 00:49:38.351 [2024-12-09 05:44:32.449772] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x562700) on tqpair(0x500690): expected_datao=0, payload_size=4096 00:49:38.351 [2024-12-09 05:44:32.449779] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.351 [2024-12-09 05:44:32.449795] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:49:38.351 [2024-12-09 05:44:32.449805] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:49:38.351 [2024-12-09 05:44:32.494301] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.351 [2024-12-09 05:44:32.494321] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.351 [2024-12-09 05:44:32.494329] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.351 [2024-12-09 05:44:32.494336] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562700) on tqpair=0x500690 00:49:38.351 [2024-12-09 05:44:32.494356] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:49:38.351 [2024-12-09 05:44:32.494395] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.351 [2024-12-09 05:44:32.494407] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x500690) 00:49:38.351 [2024-12-09 05:44:32.494419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.351 [2024-12-09 05:44:32.494431] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.351 [2024-12-09 05:44:32.494438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.351 [2024-12-09 05:44:32.494444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x500690) 00:49:38.351 [2024-12-09 05:44:32.494453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:49:38.351 [2024-12-09 05:44:32.494482] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562700, cid 4, qid 0 00:49:38.351 [2024-12-09 05:44:32.494494] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562880, cid 5, qid 0 00:49:38.351 [2024-12-09 05:44:32.494625] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:49:38.351 [2024-12-09 05:44:32.494640] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:49:38.351 [2024-12-09 05:44:32.494652] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:49:38.351 [2024-12-09 05:44:32.494659] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x500690): datao=0, datal=1024, cccid=4 00:49:38.351 [2024-12-09 05:44:32.494666] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x562700) on tqpair(0x500690): expected_datao=0, payload_size=1024 00:49:38.351 [2024-12-09 05:44:32.494674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.351 [2024-12-09 05:44:32.494684] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:49:38.351 [2024-12-09 05:44:32.494691] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:49:38.351 [2024-12-09 05:44:32.494700] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.351 [2024-12-09 05:44:32.494709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.351 [2024-12-09 05:44:32.494716] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.352 [2024-12-09 05:44:32.494722] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562880) on tqpair=0x500690 00:49:38.352 [2024-12-09 05:44:32.535350] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.352 [2024-12-09 05:44:32.535370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.352 [2024-12-09 05:44:32.535378] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.352 [2024-12-09 05:44:32.535385] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562700) on tqpair=0x500690 00:49:38.352 [2024-12-09 05:44:32.535402] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.352 [2024-12-09 05:44:32.535412] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x500690) 00:49:38.352 [2024-12-09 05:44:32.535424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.352 [2024-12-09 05:44:32.535453] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562700, cid 4, qid 0 00:49:38.352 [2024-12-09 05:44:32.535543] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:49:38.352 [2024-12-09 05:44:32.535555] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:49:38.352 [2024-12-09 05:44:32.535562] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:49:38.352 [2024-12-09 05:44:32.535569] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x500690): datao=0, datal=3072, cccid=4 00:49:38.352 [2024-12-09 05:44:32.535576] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x562700) on tqpair(0x500690): expected_datao=0, payload_size=3072 00:49:38.352 [2024-12-09 05:44:32.535583] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
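
The GET LOG PAGE (02) commands traced here (cdw10 00ff0070, 02ff0070, 00010070) read log page 0x70, the discovery log, in pieces and then re-check the generation counter; the formatted controller and discovery-log dump printed by spdk_nvme_identify follows below. A minimal sketch of fetching the same page through the public SPDK host API, assuming spdk_env_init() has run, the discovery controller is already connected, and a fixed 4 KiB buffer is enough; the buffer sizing and helper names are illustrative, not taken from the test code:

/* Sketch: reading the NVMe-oF discovery log page (log page 0x70). */
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"
#include "spdk/env.h"
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool g_done;

static void
get_log_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
        (void)arg;
        (void)cpl;
        g_done = true;
}

int
dump_discovery_log(struct spdk_nvme_ctrlr *ctrlr)
{
        /* 4 KiB holds the 1 KiB header plus three 1 KiB entries; a complete
         * implementation would size the buffer from numrec and re-read until
         * genctr stops changing, as the trace above does. */
        struct spdk_nvmf_discovery_log_page *log =
                spdk_zmalloc(4096, 0, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
        if (log == NULL) {
                return -1;
        }

        g_done = false;
        if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                             0 /* nsid */, log, 4096, 0 /* offset */,
                                             get_log_done, NULL) != 0) {
                spdk_free(log);
                return -1;
        }
        while (!g_done) {
                spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }

        printf("genctr=%ju numrec=%ju\n",
               (uintmax_t)log->genctr, (uintmax_t)log->numrec);
        for (uint64_t i = 0; i < log->numrec && i < 3; i++) {
                struct spdk_nvmf_discovery_log_page_entry *e = &log->entries[i];
                printf("entry %ju: subnqn=%.256s traddr=%.256s trsvcid=%.32s\n",
                       (uintmax_t)i, (const char *)e->subnqn,
                       (const char *)e->traddr, (const char *)e->trsvcid);
        }

        spdk_free(log);
        return 0;
}
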
00:49:38.352 [2024-12-09 05:44:32.535600] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:49:38.352 [2024-12-09 05:44:32.535609] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:49:38.352 [2024-12-09 05:44:32.535621] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.352 [2024-12-09 05:44:32.535631] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.352 [2024-12-09 05:44:32.535638] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.352 [2024-12-09 05:44:32.535645] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562700) on tqpair=0x500690 00:49:38.352 [2024-12-09 05:44:32.535660] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.352 [2024-12-09 05:44:32.535668] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x500690) 00:49:38.352 [2024-12-09 05:44:32.535679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.352 [2024-12-09 05:44:32.535707] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562700, cid 4, qid 0 00:49:38.352 [2024-12-09 05:44:32.535808] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:49:38.352 [2024-12-09 05:44:32.535822] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:49:38.352 [2024-12-09 05:44:32.535829] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:49:38.352 [2024-12-09 05:44:32.535840] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x500690): datao=0, datal=8, cccid=4 00:49:38.352 [2024-12-09 05:44:32.535849] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x562700) on tqpair(0x500690): expected_datao=0, payload_size=8 00:49:38.352 [2024-12-09 05:44:32.535856] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.352 [2024-12-09 05:44:32.535866] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:49:38.352 [2024-12-09 05:44:32.535873] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:49:38.612 [2024-12-09 05:44:32.576352] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.612 [2024-12-09 05:44:32.576371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.612 [2024-12-09 05:44:32.576379] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.612 [2024-12-09 05:44:32.576386] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562700) on tqpair=0x500690 00:49:38.612 ===================================================== 00:49:38.612 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:49:38.612 ===================================================== 00:49:38.612 Controller Capabilities/Features 00:49:38.612 ================================ 00:49:38.612 Vendor ID: 0000 00:49:38.612 Subsystem Vendor ID: 0000 00:49:38.612 Serial Number: .................... 00:49:38.612 Model Number: ........................................ 
00:49:38.612 Firmware Version: 25.01 00:49:38.612 Recommended Arb Burst: 0 00:49:38.612 IEEE OUI Identifier: 00 00 00 00:49:38.612 Multi-path I/O 00:49:38.612 May have multiple subsystem ports: No 00:49:38.612 May have multiple controllers: No 00:49:38.612 Associated with SR-IOV VF: No 00:49:38.612 Max Data Transfer Size: 131072 00:49:38.612 Max Number of Namespaces: 0 00:49:38.612 Max Number of I/O Queues: 1024 00:49:38.612 NVMe Specification Version (VS): 1.3 00:49:38.612 NVMe Specification Version (Identify): 1.3 00:49:38.612 Maximum Queue Entries: 128 00:49:38.612 Contiguous Queues Required: Yes 00:49:38.612 Arbitration Mechanisms Supported 00:49:38.612 Weighted Round Robin: Not Supported 00:49:38.612 Vendor Specific: Not Supported 00:49:38.612 Reset Timeout: 15000 ms 00:49:38.612 Doorbell Stride: 4 bytes 00:49:38.612 NVM Subsystem Reset: Not Supported 00:49:38.612 Command Sets Supported 00:49:38.612 NVM Command Set: Supported 00:49:38.612 Boot Partition: Not Supported 00:49:38.612 Memory Page Size Minimum: 4096 bytes 00:49:38.612 Memory Page Size Maximum: 4096 bytes 00:49:38.612 Persistent Memory Region: Not Supported 00:49:38.612 Optional Asynchronous Events Supported 00:49:38.612 Namespace Attribute Notices: Not Supported 00:49:38.612 Firmware Activation Notices: Not Supported 00:49:38.612 ANA Change Notices: Not Supported 00:49:38.612 PLE Aggregate Log Change Notices: Not Supported 00:49:38.612 LBA Status Info Alert Notices: Not Supported 00:49:38.612 EGE Aggregate Log Change Notices: Not Supported 00:49:38.612 Normal NVM Subsystem Shutdown event: Not Supported 00:49:38.612 Zone Descriptor Change Notices: Not Supported 00:49:38.612 Discovery Log Change Notices: Supported 00:49:38.612 Controller Attributes 00:49:38.612 128-bit Host Identifier: Not Supported 00:49:38.612 Non-Operational Permissive Mode: Not Supported 00:49:38.612 NVM Sets: Not Supported 00:49:38.612 Read Recovery Levels: Not Supported 00:49:38.612 Endurance Groups: Not Supported 00:49:38.612 Predictable Latency Mode: Not Supported 00:49:38.612 Traffic Based Keep ALive: Not Supported 00:49:38.612 Namespace Granularity: Not Supported 00:49:38.612 SQ Associations: Not Supported 00:49:38.613 UUID List: Not Supported 00:49:38.613 Multi-Domain Subsystem: Not Supported 00:49:38.613 Fixed Capacity Management: Not Supported 00:49:38.613 Variable Capacity Management: Not Supported 00:49:38.613 Delete Endurance Group: Not Supported 00:49:38.613 Delete NVM Set: Not Supported 00:49:38.613 Extended LBA Formats Supported: Not Supported 00:49:38.613 Flexible Data Placement Supported: Not Supported 00:49:38.613 00:49:38.613 Controller Memory Buffer Support 00:49:38.613 ================================ 00:49:38.613 Supported: No 00:49:38.613 00:49:38.613 Persistent Memory Region Support 00:49:38.613 ================================ 00:49:38.613 Supported: No 00:49:38.613 00:49:38.613 Admin Command Set Attributes 00:49:38.613 ============================ 00:49:38.613 Security Send/Receive: Not Supported 00:49:38.613 Format NVM: Not Supported 00:49:38.613 Firmware Activate/Download: Not Supported 00:49:38.613 Namespace Management: Not Supported 00:49:38.613 Device Self-Test: Not Supported 00:49:38.613 Directives: Not Supported 00:49:38.613 NVMe-MI: Not Supported 00:49:38.613 Virtualization Management: Not Supported 00:49:38.613 Doorbell Buffer Config: Not Supported 00:49:38.613 Get LBA Status Capability: Not Supported 00:49:38.613 Command & Feature Lockdown Capability: Not Supported 00:49:38.613 Abort Command Limit: 1 00:49:38.613 Async 
Event Request Limit: 4 00:49:38.613 Number of Firmware Slots: N/A 00:49:38.613 Firmware Slot 1 Read-Only: N/A 00:49:38.613 Firmware Activation Without Reset: N/A 00:49:38.613 Multiple Update Detection Support: N/A 00:49:38.613 Firmware Update Granularity: No Information Provided 00:49:38.613 Per-Namespace SMART Log: No 00:49:38.613 Asymmetric Namespace Access Log Page: Not Supported 00:49:38.613 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:49:38.613 Command Effects Log Page: Not Supported 00:49:38.613 Get Log Page Extended Data: Supported 00:49:38.613 Telemetry Log Pages: Not Supported 00:49:38.613 Persistent Event Log Pages: Not Supported 00:49:38.613 Supported Log Pages Log Page: May Support 00:49:38.613 Commands Supported & Effects Log Page: Not Supported 00:49:38.613 Feature Identifiers & Effects Log Page:May Support 00:49:38.613 NVMe-MI Commands & Effects Log Page: May Support 00:49:38.613 Data Area 4 for Telemetry Log: Not Supported 00:49:38.613 Error Log Page Entries Supported: 128 00:49:38.613 Keep Alive: Not Supported 00:49:38.613 00:49:38.613 NVM Command Set Attributes 00:49:38.613 ========================== 00:49:38.613 Submission Queue Entry Size 00:49:38.613 Max: 1 00:49:38.613 Min: 1 00:49:38.613 Completion Queue Entry Size 00:49:38.613 Max: 1 00:49:38.613 Min: 1 00:49:38.613 Number of Namespaces: 0 00:49:38.613 Compare Command: Not Supported 00:49:38.613 Write Uncorrectable Command: Not Supported 00:49:38.613 Dataset Management Command: Not Supported 00:49:38.613 Write Zeroes Command: Not Supported 00:49:38.613 Set Features Save Field: Not Supported 00:49:38.613 Reservations: Not Supported 00:49:38.613 Timestamp: Not Supported 00:49:38.613 Copy: Not Supported 00:49:38.613 Volatile Write Cache: Not Present 00:49:38.613 Atomic Write Unit (Normal): 1 00:49:38.613 Atomic Write Unit (PFail): 1 00:49:38.613 Atomic Compare & Write Unit: 1 00:49:38.613 Fused Compare & Write: Supported 00:49:38.613 Scatter-Gather List 00:49:38.613 SGL Command Set: Supported 00:49:38.613 SGL Keyed: Supported 00:49:38.613 SGL Bit Bucket Descriptor: Not Supported 00:49:38.613 SGL Metadata Pointer: Not Supported 00:49:38.613 Oversized SGL: Not Supported 00:49:38.613 SGL Metadata Address: Not Supported 00:49:38.613 SGL Offset: Supported 00:49:38.613 Transport SGL Data Block: Not Supported 00:49:38.613 Replay Protected Memory Block: Not Supported 00:49:38.613 00:49:38.613 Firmware Slot Information 00:49:38.613 ========================= 00:49:38.613 Active slot: 0 00:49:38.613 00:49:38.613 00:49:38.613 Error Log 00:49:38.613 ========= 00:49:38.613 00:49:38.613 Active Namespaces 00:49:38.613 ================= 00:49:38.613 Discovery Log Page 00:49:38.613 ================== 00:49:38.613 Generation Counter: 2 00:49:38.613 Number of Records: 2 00:49:38.613 Record Format: 0 00:49:38.613 00:49:38.613 Discovery Log Entry 0 00:49:38.613 ---------------------- 00:49:38.613 Transport Type: 3 (TCP) 00:49:38.613 Address Family: 1 (IPv4) 00:49:38.613 Subsystem Type: 3 (Current Discovery Subsystem) 00:49:38.613 Entry Flags: 00:49:38.613 Duplicate Returned Information: 1 00:49:38.613 Explicit Persistent Connection Support for Discovery: 1 00:49:38.613 Transport Requirements: 00:49:38.613 Secure Channel: Not Required 00:49:38.613 Port ID: 0 (0x0000) 00:49:38.613 Controller ID: 65535 (0xffff) 00:49:38.613 Admin Max SQ Size: 128 00:49:38.613 Transport Service Identifier: 4420 00:49:38.613 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:49:38.613 Transport Address: 10.0.0.2 00:49:38.613 
Discovery Log Entry 1 00:49:38.613 ---------------------- 00:49:38.613 Transport Type: 3 (TCP) 00:49:38.613 Address Family: 1 (IPv4) 00:49:38.613 Subsystem Type: 2 (NVM Subsystem) 00:49:38.613 Entry Flags: 00:49:38.613 Duplicate Returned Information: 0 00:49:38.613 Explicit Persistent Connection Support for Discovery: 0 00:49:38.613 Transport Requirements: 00:49:38.613 Secure Channel: Not Required 00:49:38.613 Port ID: 0 (0x0000) 00:49:38.613 Controller ID: 65535 (0xffff) 00:49:38.613 Admin Max SQ Size: 128 00:49:38.613 Transport Service Identifier: 4420 00:49:38.613 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:49:38.613 Transport Address: 10.0.0.2 [2024-12-09 05:44:32.576503] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:49:38.613 [2024-12-09 05:44:32.576525] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562100) on tqpair=0x500690 00:49:38.613 [2024-12-09 05:44:32.576536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:38.614 [2024-12-09 05:44:32.576545] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562280) on tqpair=0x500690 00:49:38.614 [2024-12-09 05:44:32.576553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:38.614 [2024-12-09 05:44:32.576561] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562400) on tqpair=0x500690 00:49:38.614 [2024-12-09 05:44:32.576569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:38.614 [2024-12-09 05:44:32.576577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562580) on tqpair=0x500690 00:49:38.614 [2024-12-09 05:44:32.576584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:38.614 [2024-12-09 05:44:32.576597] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.614 [2024-12-09 05:44:32.576605] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.614 [2024-12-09 05:44:32.576612] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x500690) 00:49:38.614 [2024-12-09 05:44:32.576623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.614 [2024-12-09 05:44:32.576662] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562580, cid 3, qid 0 00:49:38.614 [2024-12-09 05:44:32.576755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.614 [2024-12-09 05:44:32.576768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.614 [2024-12-09 05:44:32.576775] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.614 [2024-12-09 05:44:32.576782] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562580) on tqpair=0x500690 00:49:38.614 [2024-12-09 05:44:32.576793] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.614 [2024-12-09 05:44:32.576801] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.614 [2024-12-09 05:44:32.576807] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x500690) 00:49:38.614 [2024-12-09 05:44:32.576817] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.614 [2024-12-09 05:44:32.576844] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562580, cid 3, qid 0 00:49:38.614 [2024-12-09 05:44:32.576931] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.614 [2024-12-09 05:44:32.576944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.614 [2024-12-09 05:44:32.576951] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.614 [2024-12-09 05:44:32.576961] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562580) on tqpair=0x500690 00:49:38.614 [2024-12-09 05:44:32.576970] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:49:38.614 [2024-12-09 05:44:32.576978] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:49:38.614 [2024-12-09 05:44:32.576994] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.614 [2024-12-09 05:44:32.577003] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.614 [2024-12-09 05:44:32.577009] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x500690) 00:49:38.614 [2024-12-09 05:44:32.577019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.614 [2024-12-09 05:44:32.577040] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562580, cid 3, qid 0 00:49:38.614 [2024-12-09 05:44:32.577113] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.614 [2024-12-09 05:44:32.577127] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.614 [2024-12-09 05:44:32.577134] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.614 [2024-12-09 05:44:32.577141] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562580) on tqpair=0x500690 00:49:38.614 [2024-12-09 05:44:32.577157] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.614 [2024-12-09 05:44:32.577167] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.614 [2024-12-09 05:44:32.577173] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x500690) 00:49:38.614 [2024-12-09 05:44:32.577183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.614 [2024-12-09 05:44:32.577204] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562580, cid 3, qid 0 00:49:38.614 [2024-12-09 05:44:32.577289] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.614 [2024-12-09 05:44:32.577304] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.614 [2024-12-09 05:44:32.577311] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.614 [2024-12-09 05:44:32.577318] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562580) on tqpair=0x500690 00:49:38.614 [2024-12-09 05:44:32.577334] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.614 [2024-12-09 05:44:32.577344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.614 [2024-12-09 05:44:32.577350] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x500690) 00:49:38.614 [2024-12-09 05:44:32.577360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.614 [2024-12-09 05:44:32.577381] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562580, cid 3, qid 0 00:49:38.614 [2024-12-09 05:44:32.577458] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.614 [2024-12-09 05:44:32.577472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.614 [2024-12-09 05:44:32.577479] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.614 [2024-12-09 05:44:32.577485] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562580) on tqpair=0x500690 00:49:38.614 [2024-12-09 05:44:32.577501] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.614 [2024-12-09 05:44:32.577511] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.614 [2024-12-09 05:44:32.577517] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x500690) 00:49:38.614 [2024-12-09 05:44:32.577527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.614 [2024-12-09 05:44:32.577548] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562580, cid 3, qid 0 00:49:38.614 [2024-12-09 05:44:32.577621] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.614 [2024-12-09 05:44:32.577639] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.614 [2024-12-09 05:44:32.577647] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.614 [2024-12-09 05:44:32.577653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562580) on tqpair=0x500690 00:49:38.614 [2024-12-09 05:44:32.577669] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.614 [2024-12-09 05:44:32.577679] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.614 [2024-12-09 05:44:32.577685] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x500690) 00:49:38.614 [2024-12-09 05:44:32.577696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.614 [2024-12-09 05:44:32.577716] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562580, cid 3, qid 0 00:49:38.614 [2024-12-09 05:44:32.577786] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.614 [2024-12-09 05:44:32.577798] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.614 [2024-12-09 05:44:32.577805] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.614 [2024-12-09 05:44:32.577812] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562580) on tqpair=0x500690 00:49:38.614 [2024-12-09 05:44:32.577828] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.614 [2024-12-09 05:44:32.577837] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.614 [2024-12-09 05:44:32.577844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x500690) 00:49:38.614 [2024-12-09 05:44:32.577854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.614 [2024-12-09 05:44:32.577874] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562580, cid 3, qid 0 00:49:38.614 [2024-12-09 05:44:32.577964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.614 [2024-12-09 05:44:32.577976] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.614 [2024-12-09 05:44:32.577983] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.614 [2024-12-09 05:44:32.577990] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562580) on tqpair=0x500690 00:49:38.615 [2024-12-09 05:44:32.578006] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.615 [2024-12-09 05:44:32.578015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.615 [2024-12-09 05:44:32.578021] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x500690) 00:49:38.615 [2024-12-09 05:44:32.578031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.615 [2024-12-09 05:44:32.578051] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562580, cid 3, qid 0 00:49:38.615 [2024-12-09 05:44:32.578141] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.615 [2024-12-09 05:44:32.578153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.615 [2024-12-09 05:44:32.578160] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.615 [2024-12-09 05:44:32.578167] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562580) on tqpair=0x500690 00:49:38.615 [2024-12-09 05:44:32.578183] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.615 [2024-12-09 05:44:32.578192] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.615 [2024-12-09 05:44:32.578198] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x500690) 00:49:38.615 [2024-12-09 05:44:32.578208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.615 [2024-12-09 05:44:32.578228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562580, cid 3, qid 0 00:49:38.615 [2024-12-09 05:44:32.582288] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.615 [2024-12-09 05:44:32.582306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.615 [2024-12-09 05:44:32.582317] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.615 [2024-12-09 05:44:32.582325] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562580) on tqpair=0x500690 00:49:38.615 [2024-12-09 05:44:32.582343] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.615 [2024-12-09 05:44:32.582353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.615 [2024-12-09 05:44:32.582360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x500690) 00:49:38.615 [2024-12-09 05:44:32.582370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.615 [2024-12-09 05:44:32.582392] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x562580, cid 3, qid 0 00:49:38.615 [2024-12-09 05:44:32.582482] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.615 [2024-12-09 05:44:32.582494] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.615 [2024-12-09 05:44:32.582502] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.615 [2024-12-09 05:44:32.582508] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x562580) on tqpair=0x500690 00:49:38.615 [2024-12-09 05:44:32.582521] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:49:38.615 00:49:38.615 05:44:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:49:38.615 [2024-12-09 05:44:32.698830] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:49:38.615 [2024-12-09 05:44:32.698867] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid711046 ] 00:49:38.615 [2024-12-09 05:44:32.747903] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:49:38.615 [2024-12-09 05:44:32.747955] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:49:38.615 [2024-12-09 05:44:32.747965] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:49:38.615 [2024-12-09 05:44:32.747989] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:49:38.615 [2024-12-09 05:44:32.748001] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:49:38.615 [2024-12-09 05:44:32.751573] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:49:38.615 [2024-12-09 05:44:32.751629] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x9a1690 0 00:49:38.615 [2024-12-09 05:44:32.759282] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:49:38.615 [2024-12-09 05:44:32.759303] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:49:38.615 [2024-12-09 05:44:32.759311] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:49:38.615 [2024-12-09 05:44:32.759317] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:49:38.615 [2024-12-09 05:44:32.759359] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.615 [2024-12-09 05:44:32.759372] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.615 [2024-12-09 05:44:32.759379] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a1690) 00:49:38.615 [2024-12-09 05:44:32.759393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:49:38.615 [2024-12-09 05:44:32.759420] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03100, cid 0, qid 0 00:49:38.615 [2024-12-09 05:44:32.767287] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.615 [2024-12-09 05:44:32.767305] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:49:38.615 [2024-12-09 05:44:32.767323] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.615 [2024-12-09 05:44:32.767331] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03100) on tqpair=0x9a1690 00:49:38.615 [2024-12-09 05:44:32.767351] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:49:38.615 [2024-12-09 05:44:32.767363] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:49:38.615 [2024-12-09 05:44:32.767373] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:49:38.615 [2024-12-09 05:44:32.767392] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.615 [2024-12-09 05:44:32.767401] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.615 [2024-12-09 05:44:32.767408] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a1690) 00:49:38.615 [2024-12-09 05:44:32.767419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.615 [2024-12-09 05:44:32.767443] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03100, cid 0, qid 0 00:49:38.615 [2024-12-09 05:44:32.767561] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.615 [2024-12-09 05:44:32.767574] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.615 [2024-12-09 05:44:32.767581] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.615 [2024-12-09 05:44:32.767588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03100) on tqpair=0x9a1690 00:49:38.615 [2024-12-09 05:44:32.767601] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:49:38.615 [2024-12-09 05:44:32.767616] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:49:38.615 [2024-12-09 05:44:32.767629] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.615 [2024-12-09 05:44:32.767636] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.615 [2024-12-09 05:44:32.767643] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a1690) 00:49:38.615 [2024-12-09 05:44:32.767653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.615 [2024-12-09 05:44:32.767675] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03100, cid 0, qid 0 00:49:38.615 [2024-12-09 05:44:32.767750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.615 [2024-12-09 05:44:32.767764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.615 [2024-12-09 05:44:32.767771] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.616 [2024-12-09 05:44:32.767778] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03100) on tqpair=0x9a1690 00:49:38.616 [2024-12-09 05:44:32.767787] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:49:38.616 [2024-12-09 05:44:32.767801] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:49:38.616 [2024-12-09 05:44:32.767814] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.616 [2024-12-09 05:44:32.767822] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.616 [2024-12-09 05:44:32.767828] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a1690) 00:49:38.616 [2024-12-09 05:44:32.767838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.616 [2024-12-09 05:44:32.767859] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03100, cid 0, qid 0 00:49:38.616 [2024-12-09 05:44:32.767931] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.616 [2024-12-09 05:44:32.767944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.616 [2024-12-09 05:44:32.767952] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.616 [2024-12-09 05:44:32.767958] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03100) on tqpair=0x9a1690 00:49:38.616 [2024-12-09 05:44:32.767967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:49:38.616 [2024-12-09 05:44:32.767984] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.616 [2024-12-09 05:44:32.767993] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.616 [2024-12-09 05:44:32.768000] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a1690) 00:49:38.616 [2024-12-09 05:44:32.768010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.616 [2024-12-09 05:44:32.768030] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03100, cid 0, qid 0 00:49:38.616 [2024-12-09 05:44:32.768103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.616 [2024-12-09 05:44:32.768117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.616 [2024-12-09 05:44:32.768124] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.616 [2024-12-09 05:44:32.768131] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03100) on tqpair=0x9a1690 00:49:38.616 [2024-12-09 05:44:32.768139] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:49:38.616 [2024-12-09 05:44:32.768147] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:49:38.616 [2024-12-09 05:44:32.768160] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:49:38.616 [2024-12-09 05:44:32.768278] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:49:38.616 [2024-12-09 05:44:32.768289] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:49:38.616 [2024-12-09 05:44:32.768301] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.616 [2024-12-09 05:44:32.768309] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.616 [2024-12-09 05:44:32.768315] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a1690) 00:49:38.616 [2024-12-09 05:44:32.768326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.616 [2024-12-09 05:44:32.768347] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03100, cid 0, qid 0 00:49:38.616 [2024-12-09 05:44:32.768449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.616 [2024-12-09 05:44:32.768462] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.616 [2024-12-09 05:44:32.768469] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.616 [2024-12-09 05:44:32.768476] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03100) on tqpair=0x9a1690 00:49:38.616 [2024-12-09 05:44:32.768484] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:49:38.616 [2024-12-09 05:44:32.768501] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.616 [2024-12-09 05:44:32.768510] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.616 [2024-12-09 05:44:32.768516] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a1690) 00:49:38.616 [2024-12-09 05:44:32.768527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.616 [2024-12-09 05:44:32.768547] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03100, cid 0, qid 0 00:49:38.616 [2024-12-09 05:44:32.768625] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.616 [2024-12-09 05:44:32.768639] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.616 [2024-12-09 05:44:32.768647] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.616 [2024-12-09 05:44:32.768653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03100) on tqpair=0x9a1690 00:49:38.616 [2024-12-09 05:44:32.768661] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:49:38.616 [2024-12-09 05:44:32.768670] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:49:38.616 [2024-12-09 05:44:32.768684] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:49:38.616 [2024-12-09 05:44:32.768698] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:49:38.616 [2024-12-09 05:44:32.768712] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.616 [2024-12-09 05:44:32.768720] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a1690) 00:49:38.616 [2024-12-09 05:44:32.768731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.616 [2024-12-09 05:44:32.768752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xa03100, cid 0, qid 0 00:49:38.616 [2024-12-09 05:44:32.768852] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:49:38.616 [2024-12-09 05:44:32.768865] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:49:38.616 [2024-12-09 05:44:32.768872] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:49:38.616 [2024-12-09 05:44:32.768879] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9a1690): datao=0, datal=4096, cccid=0 00:49:38.616 [2024-12-09 05:44:32.768886] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa03100) on tqpair(0x9a1690): expected_datao=0, payload_size=4096 00:49:38.616 [2024-12-09 05:44:32.768893] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.616 [2024-12-09 05:44:32.768910] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:49:38.616 [2024-12-09 05:44:32.768919] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:49:38.616 [2024-12-09 05:44:32.768930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.616 [2024-12-09 05:44:32.768940] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.616 [2024-12-09 05:44:32.768947] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.616 [2024-12-09 05:44:32.768953] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03100) on tqpair=0x9a1690 00:49:38.616 [2024-12-09 05:44:32.768964] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:49:38.616 [2024-12-09 05:44:32.768973] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:49:38.616 [2024-12-09 05:44:32.768981] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:49:38.616 [2024-12-09 05:44:32.768987] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:49:38.616 [2024-12-09 05:44:32.768995] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:49:38.616 [2024-12-09 05:44:32.769003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:49:38.616 [2024-12-09 05:44:32.769017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:49:38.616 [2024-12-09 05:44:32.769028] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.616 [2024-12-09 05:44:32.769042] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.617 [2024-12-09 05:44:32.769050] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a1690) 00:49:38.617 [2024-12-09 05:44:32.769061] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:49:38.617 [2024-12-09 05:44:32.769082] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03100, cid 0, qid 0 00:49:38.617 [2024-12-09 05:44:32.769178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.617 [2024-12-09 05:44:32.769192] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.617 [2024-12-09 05:44:32.769199] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:49:38.617 [2024-12-09 05:44:32.769206] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03100) on tqpair=0x9a1690 00:49:38.617 [2024-12-09 05:44:32.769216] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.617 [2024-12-09 05:44:32.769224] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.617 [2024-12-09 05:44:32.769230] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a1690) 00:49:38.617 [2024-12-09 05:44:32.769240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:49:38.617 [2024-12-09 05:44:32.769250] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.617 [2024-12-09 05:44:32.769256] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.617 [2024-12-09 05:44:32.769263] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x9a1690) 00:49:38.617 [2024-12-09 05:44:32.769280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:49:38.617 [2024-12-09 05:44:32.769292] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.617 [2024-12-09 05:44:32.769299] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.617 [2024-12-09 05:44:32.769305] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x9a1690) 00:49:38.617 [2024-12-09 05:44:32.769313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:49:38.617 [2024-12-09 05:44:32.769323] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.617 [2024-12-09 05:44:32.769330] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.617 [2024-12-09 05:44:32.769336] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a1690) 00:49:38.617 [2024-12-09 05:44:32.769345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:49:38.617 [2024-12-09 05:44:32.769353] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:49:38.617 [2024-12-09 05:44:32.769372] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:49:38.617 [2024-12-09 05:44:32.769386] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.617 [2024-12-09 05:44:32.769393] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9a1690) 00:49:38.617 [2024-12-09 05:44:32.769403] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.617 [2024-12-09 05:44:32.769425] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03100, cid 0, qid 0 00:49:38.617 [2024-12-09 05:44:32.769437] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03280, cid 1, qid 0 00:49:38.617 [2024-12-09 05:44:32.769445] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03400, cid 2, qid 0 00:49:38.617 [2024-12-09 05:44:32.769453] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xa03580, cid 3, qid 0 00:49:38.617 [2024-12-09 05:44:32.769461] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03700, cid 4, qid 0 00:49:38.617 [2024-12-09 05:44:32.769603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.617 [2024-12-09 05:44:32.769616] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.617 [2024-12-09 05:44:32.769624] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.617 [2024-12-09 05:44:32.769630] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03700) on tqpair=0x9a1690 00:49:38.617 [2024-12-09 05:44:32.769639] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:49:38.617 [2024-12-09 05:44:32.769647] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:49:38.617 [2024-12-09 05:44:32.769665] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:49:38.617 [2024-12-09 05:44:32.769677] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:49:38.617 [2024-12-09 05:44:32.769688] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.617 [2024-12-09 05:44:32.769695] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.617 [2024-12-09 05:44:32.769701] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9a1690) 00:49:38.617 [2024-12-09 05:44:32.769711] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:49:38.617 [2024-12-09 05:44:32.769732] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03700, cid 4, qid 0 00:49:38.617 [2024-12-09 05:44:32.769922] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.617 [2024-12-09 05:44:32.769937] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.617 [2024-12-09 05:44:32.769944] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.617 [2024-12-09 05:44:32.769951] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03700) on tqpair=0x9a1690 00:49:38.617 [2024-12-09 05:44:32.770020] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:49:38.617 [2024-12-09 05:44:32.770040] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:49:38.617 [2024-12-09 05:44:32.770055] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.617 [2024-12-09 05:44:32.770063] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9a1690) 00:49:38.617 [2024-12-09 05:44:32.770073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.617 [2024-12-09 05:44:32.770095] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03700, cid 4, qid 0 00:49:38.617 [2024-12-09 05:44:32.770189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:49:38.617 
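
From the spdk_nvme_identify invocation above onward, the trace repeats the same initialization state machine against nqn.2016-06.io.spdk:cnode1: FABRIC CONNECT, property get/set for CC and CSTS, IDENTIFY, AER configuration, keep-alive and queue-count setup. In application code that whole sequence sits behind a single connect call. A minimal sketch under that assumption, reusing the transport string the test passes via -r and assuming spdk_env_init() has already been called:

/* Sketch: what the spdk_nvme_identify run above does to reach cnode1. */
#include "spdk/nvme.h"
#include <stdio.h>
#include <string.h>

struct spdk_nvme_ctrlr *
connect_cnode1(void)
{
        struct spdk_nvme_transport_id trid;
        struct spdk_nvme_ctrlr_opts opts;

        memset(&trid, 0, sizeof(trid));
        /* Same transport ID string the test passes on the command line. */
        if (spdk_nvme_transport_id_parse(&trid,
                        "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
                        "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                fprintf(stderr, "failed to parse transport ID\n");
                return NULL;
        }

        spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));

        /* spdk_nvme_connect() drives the FABRIC CONNECT, property get/set,
         * CC.EN/CSTS.RDY handshake and IDENTIFY sequence traced above. */
        return spdk_nvme_connect(&trid, &opts, sizeof(opts));
}

Tearing the session down again with spdk_nvme_detach(ctrlr) corresponds to the "Prepare to destruct SSD" / shutdown sequence traced earlier for the discovery controller.
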
[2024-12-09 05:44:32.770202] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:49:38.617 [2024-12-09 05:44:32.770209] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:49:38.617 [2024-12-09 05:44:32.770215] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9a1690): datao=0, datal=4096, cccid=4 00:49:38.617 [2024-12-09 05:44:32.770223] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa03700) on tqpair(0x9a1690): expected_datao=0, payload_size=4096 00:49:38.617 [2024-12-09 05:44:32.770230] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.617 [2024-12-09 05:44:32.770246] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:49:38.617 [2024-12-09 05:44:32.770255] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:49:38.617 [2024-12-09 05:44:32.810369] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.617 [2024-12-09 05:44:32.810389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.617 [2024-12-09 05:44:32.810397] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.617 [2024-12-09 05:44:32.810408] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03700) on tqpair=0x9a1690 00:49:38.617 [2024-12-09 05:44:32.810429] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:49:38.617 [2024-12-09 05:44:32.810450] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:49:38.617 [2024-12-09 05:44:32.810468] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:49:38.617 [2024-12-09 05:44:32.810483] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.617 [2024-12-09 05:44:32.810491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9a1690) 00:49:38.617 [2024-12-09 05:44:32.810502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.617 [2024-12-09 05:44:32.810525] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03700, cid 4, qid 0 00:49:38.617 [2024-12-09 05:44:32.810645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:49:38.618 [2024-12-09 05:44:32.810658] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:49:38.618 [2024-12-09 05:44:32.810665] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:49:38.618 [2024-12-09 05:44:32.810672] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9a1690): datao=0, datal=4096, cccid=4 00:49:38.618 [2024-12-09 05:44:32.810679] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa03700) on tqpair(0x9a1690): expected_datao=0, payload_size=4096 00:49:38.618 [2024-12-09 05:44:32.810686] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.618 [2024-12-09 05:44:32.810702] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:49:38.618 [2024-12-09 05:44:32.810712] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:49:38.877 [2024-12-09 05:44:32.855289] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.878 [2024-12-09 05:44:32.855308] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:49:38.878 [2024-12-09 05:44:32.855317] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.878 [2024-12-09 05:44:32.855324] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03700) on tqpair=0x9a1690 00:49:38.878 [2024-12-09 05:44:32.855340] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:49:38.878 [2024-12-09 05:44:32.855359] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:49:38.878 [2024-12-09 05:44:32.855374] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.878 [2024-12-09 05:44:32.855382] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9a1690) 00:49:38.878 [2024-12-09 05:44:32.855394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.878 [2024-12-09 05:44:32.855417] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03700, cid 4, qid 0 00:49:38.878 [2024-12-09 05:44:32.855544] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:49:38.878 [2024-12-09 05:44:32.855557] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:49:38.878 [2024-12-09 05:44:32.855565] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:49:38.878 [2024-12-09 05:44:32.855571] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9a1690): datao=0, datal=4096, cccid=4 00:49:38.878 [2024-12-09 05:44:32.855579] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa03700) on tqpair(0x9a1690): expected_datao=0, payload_size=4096 00:49:38.878 [2024-12-09 05:44:32.855586] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.878 [2024-12-09 05:44:32.855603] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:49:38.878 [2024-12-09 05:44:32.855616] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:49:38.878 [2024-12-09 05:44:32.900288] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.878 [2024-12-09 05:44:32.900308] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.878 [2024-12-09 05:44:32.900317] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.878 [2024-12-09 05:44:32.900324] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03700) on tqpair=0x9a1690 00:49:38.878 [2024-12-09 05:44:32.900343] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:49:38.878 [2024-12-09 05:44:32.900363] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:49:38.878 [2024-12-09 05:44:32.900378] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:49:38.878 [2024-12-09 05:44:32.900389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:49:38.878 [2024-12-09 05:44:32.900398] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell 
buffer config (timeout 30000 ms) 00:49:38.878 [2024-12-09 05:44:32.900407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:49:38.878 [2024-12-09 05:44:32.900415] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:49:38.878 [2024-12-09 05:44:32.900423] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:49:38.878 [2024-12-09 05:44:32.900432] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:49:38.878 [2024-12-09 05:44:32.900451] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.878 [2024-12-09 05:44:32.900460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9a1690) 00:49:38.878 [2024-12-09 05:44:32.900471] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.878 [2024-12-09 05:44:32.900483] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.878 [2024-12-09 05:44:32.900490] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.878 [2024-12-09 05:44:32.900496] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9a1690) 00:49:38.878 [2024-12-09 05:44:32.900505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:49:38.878 [2024-12-09 05:44:32.900532] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03700, cid 4, qid 0 00:49:38.878 [2024-12-09 05:44:32.900544] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03880, cid 5, qid 0 00:49:38.878 [2024-12-09 05:44:32.900631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.878 [2024-12-09 05:44:32.900643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.878 [2024-12-09 05:44:32.900650] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.878 [2024-12-09 05:44:32.900657] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03700) on tqpair=0x9a1690 00:49:38.878 [2024-12-09 05:44:32.900668] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.878 [2024-12-09 05:44:32.900678] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.878 [2024-12-09 05:44:32.900685] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.878 [2024-12-09 05:44:32.900691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03880) on tqpair=0x9a1690 00:49:38.878 [2024-12-09 05:44:32.900707] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.878 [2024-12-09 05:44:32.900716] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9a1690) 00:49:38.878 [2024-12-09 05:44:32.900730] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.878 [2024-12-09 05:44:32.900752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03880, cid 5, qid 0 00:49:38.878 [2024-12-09 05:44:32.900838] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.878 [2024-12-09 
05:44:32.900866] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.878 [2024-12-09 05:44:32.900876] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.878 [2024-12-09 05:44:32.900882] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03880) on tqpair=0x9a1690 00:49:38.878 [2024-12-09 05:44:32.900900] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.878 [2024-12-09 05:44:32.900910] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9a1690) 00:49:38.878 [2024-12-09 05:44:32.900920] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.878 [2024-12-09 05:44:32.900942] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03880, cid 5, qid 0 00:49:38.878 [2024-12-09 05:44:32.901026] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.878 [2024-12-09 05:44:32.901040] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.878 [2024-12-09 05:44:32.901048] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.878 [2024-12-09 05:44:32.901054] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03880) on tqpair=0x9a1690 00:49:38.878 [2024-12-09 05:44:32.901070] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.878 [2024-12-09 05:44:32.901079] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9a1690) 00:49:38.878 [2024-12-09 05:44:32.901089] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.878 [2024-12-09 05:44:32.901110] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03880, cid 5, qid 0 00:49:38.878 [2024-12-09 05:44:32.901181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.878 [2024-12-09 05:44:32.901195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.878 [2024-12-09 05:44:32.901202] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.878 [2024-12-09 05:44:32.901209] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03880) on tqpair=0x9a1690 00:49:38.878 [2024-12-09 05:44:32.901234] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.878 [2024-12-09 05:44:32.901245] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9a1690) 00:49:38.878 [2024-12-09 05:44:32.901256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.878 [2024-12-09 05:44:32.901268] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.878 [2024-12-09 05:44:32.901288] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9a1690) 00:49:38.879 [2024-12-09 05:44:32.901298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.879 [2024-12-09 05:44:32.901310] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.879 [2024-12-09 05:44:32.901318] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x9a1690) 
00:49:38.879 [2024-12-09 05:44:32.901327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.879 [2024-12-09 05:44:32.901339] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.879 [2024-12-09 05:44:32.901346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x9a1690) 00:49:38.879 [2024-12-09 05:44:32.901360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.879 [2024-12-09 05:44:32.901383] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03880, cid 5, qid 0 00:49:38.879 [2024-12-09 05:44:32.901394] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03700, cid 4, qid 0 00:49:38.879 [2024-12-09 05:44:32.901403] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03a00, cid 6, qid 0 00:49:38.879 [2024-12-09 05:44:32.901410] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03b80, cid 7, qid 0 00:49:38.879 [2024-12-09 05:44:32.901572] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:49:38.879 [2024-12-09 05:44:32.901585] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:49:38.879 [2024-12-09 05:44:32.901592] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:49:38.879 [2024-12-09 05:44:32.901598] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9a1690): datao=0, datal=8192, cccid=5 00:49:38.879 [2024-12-09 05:44:32.901606] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa03880) on tqpair(0x9a1690): expected_datao=0, payload_size=8192 00:49:38.879 [2024-12-09 05:44:32.901613] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.879 [2024-12-09 05:44:32.901637] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:49:38.879 [2024-12-09 05:44:32.901647] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:49:38.879 [2024-12-09 05:44:32.901656] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:49:38.879 [2024-12-09 05:44:32.901665] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:49:38.879 [2024-12-09 05:44:32.901672] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:49:38.879 [2024-12-09 05:44:32.901678] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9a1690): datao=0, datal=512, cccid=4 00:49:38.879 [2024-12-09 05:44:32.901685] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa03700) on tqpair(0x9a1690): expected_datao=0, payload_size=512 00:49:38.879 [2024-12-09 05:44:32.901692] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.879 [2024-12-09 05:44:32.901701] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:49:38.879 [2024-12-09 05:44:32.901709] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:49:38.879 [2024-12-09 05:44:32.901717] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:49:38.879 [2024-12-09 05:44:32.901726] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:49:38.879 [2024-12-09 05:44:32.901732] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:49:38.879 [2024-12-09 05:44:32.901738] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data 
info on tqpair(0x9a1690): datao=0, datal=512, cccid=6 00:49:38.879 [2024-12-09 05:44:32.901746] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa03a00) on tqpair(0x9a1690): expected_datao=0, payload_size=512 00:49:38.879 [2024-12-09 05:44:32.901753] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.879 [2024-12-09 05:44:32.901762] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:49:38.879 [2024-12-09 05:44:32.901769] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:49:38.879 [2024-12-09 05:44:32.901777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:49:38.879 [2024-12-09 05:44:32.901786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:49:38.879 [2024-12-09 05:44:32.901792] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:49:38.879 [2024-12-09 05:44:32.901798] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9a1690): datao=0, datal=4096, cccid=7 00:49:38.879 [2024-12-09 05:44:32.901806] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa03b80) on tqpair(0x9a1690): expected_datao=0, payload_size=4096 00:49:38.879 [2024-12-09 05:44:32.901813] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.879 [2024-12-09 05:44:32.901822] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:49:38.879 [2024-12-09 05:44:32.901829] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:49:38.879 [2024-12-09 05:44:32.901847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.879 [2024-12-09 05:44:32.901857] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.879 [2024-12-09 05:44:32.901864] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.879 [2024-12-09 05:44:32.901871] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03880) on tqpair=0x9a1690 00:49:38.879 [2024-12-09 05:44:32.901890] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.879 [2024-12-09 05:44:32.901916] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.879 [2024-12-09 05:44:32.901924] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.879 [2024-12-09 05:44:32.901930] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03700) on tqpair=0x9a1690 00:49:38.879 [2024-12-09 05:44:32.901946] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.879 [2024-12-09 05:44:32.901956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.879 [2024-12-09 05:44:32.901963] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.879 [2024-12-09 05:44:32.901969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03a00) on tqpair=0x9a1690 00:49:38.879 [2024-12-09 05:44:32.901980] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.879 [2024-12-09 05:44:32.901989] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.879 [2024-12-09 05:44:32.901996] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.879 [2024-12-09 05:44:32.902002] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03b80) on tqpair=0x9a1690 00:49:38.879 ===================================================== 00:49:38.879 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:49:38.879 ===================================================== 
00:49:38.879 Controller Capabilities/Features 00:49:38.879 ================================ 00:49:38.879 Vendor ID: 8086 00:49:38.879 Subsystem Vendor ID: 8086 00:49:38.879 Serial Number: SPDK00000000000001 00:49:38.879 Model Number: SPDK bdev Controller 00:49:38.879 Firmware Version: 25.01 00:49:38.879 Recommended Arb Burst: 6 00:49:38.879 IEEE OUI Identifier: e4 d2 5c 00:49:38.879 Multi-path I/O 00:49:38.879 May have multiple subsystem ports: Yes 00:49:38.879 May have multiple controllers: Yes 00:49:38.879 Associated with SR-IOV VF: No 00:49:38.879 Max Data Transfer Size: 131072 00:49:38.879 Max Number of Namespaces: 32 00:49:38.879 Max Number of I/O Queues: 127 00:49:38.879 NVMe Specification Version (VS): 1.3 00:49:38.879 NVMe Specification Version (Identify): 1.3 00:49:38.879 Maximum Queue Entries: 128 00:49:38.879 Contiguous Queues Required: Yes 00:49:38.879 Arbitration Mechanisms Supported 00:49:38.879 Weighted Round Robin: Not Supported 00:49:38.879 Vendor Specific: Not Supported 00:49:38.879 Reset Timeout: 15000 ms 00:49:38.879 Doorbell Stride: 4 bytes 00:49:38.879 NVM Subsystem Reset: Not Supported 00:49:38.879 Command Sets Supported 00:49:38.879 NVM Command Set: Supported 00:49:38.879 Boot Partition: Not Supported 00:49:38.879 Memory Page Size Minimum: 4096 bytes 00:49:38.879 Memory Page Size Maximum: 4096 bytes 00:49:38.879 Persistent Memory Region: Not Supported 00:49:38.880 Optional Asynchronous Events Supported 00:49:38.880 Namespace Attribute Notices: Supported 00:49:38.880 Firmware Activation Notices: Not Supported 00:49:38.880 ANA Change Notices: Not Supported 00:49:38.880 PLE Aggregate Log Change Notices: Not Supported 00:49:38.880 LBA Status Info Alert Notices: Not Supported 00:49:38.880 EGE Aggregate Log Change Notices: Not Supported 00:49:38.880 Normal NVM Subsystem Shutdown event: Not Supported 00:49:38.880 Zone Descriptor Change Notices: Not Supported 00:49:38.880 Discovery Log Change Notices: Not Supported 00:49:38.880 Controller Attributes 00:49:38.880 128-bit Host Identifier: Supported 00:49:38.880 Non-Operational Permissive Mode: Not Supported 00:49:38.880 NVM Sets: Not Supported 00:49:38.880 Read Recovery Levels: Not Supported 00:49:38.880 Endurance Groups: Not Supported 00:49:38.880 Predictable Latency Mode: Not Supported 00:49:38.880 Traffic Based Keep ALive: Not Supported 00:49:38.880 Namespace Granularity: Not Supported 00:49:38.880 SQ Associations: Not Supported 00:49:38.880 UUID List: Not Supported 00:49:38.880 Multi-Domain Subsystem: Not Supported 00:49:38.880 Fixed Capacity Management: Not Supported 00:49:38.880 Variable Capacity Management: Not Supported 00:49:38.880 Delete Endurance Group: Not Supported 00:49:38.880 Delete NVM Set: Not Supported 00:49:38.880 Extended LBA Formats Supported: Not Supported 00:49:38.880 Flexible Data Placement Supported: Not Supported 00:49:38.880 00:49:38.880 Controller Memory Buffer Support 00:49:38.880 ================================ 00:49:38.880 Supported: No 00:49:38.880 00:49:38.880 Persistent Memory Region Support 00:49:38.880 ================================ 00:49:38.880 Supported: No 00:49:38.880 00:49:38.880 Admin Command Set Attributes 00:49:38.880 ============================ 00:49:38.880 Security Send/Receive: Not Supported 00:49:38.880 Format NVM: Not Supported 00:49:38.880 Firmware Activate/Download: Not Supported 00:49:38.880 Namespace Management: Not Supported 00:49:38.880 Device Self-Test: Not Supported 00:49:38.880 Directives: Not Supported 00:49:38.880 NVMe-MI: Not Supported 00:49:38.880 
Virtualization Management: Not Supported 00:49:38.880 Doorbell Buffer Config: Not Supported 00:49:38.880 Get LBA Status Capability: Not Supported 00:49:38.880 Command & Feature Lockdown Capability: Not Supported 00:49:38.880 Abort Command Limit: 4 00:49:38.880 Async Event Request Limit: 4 00:49:38.880 Number of Firmware Slots: N/A 00:49:38.880 Firmware Slot 1 Read-Only: N/A 00:49:38.880 Firmware Activation Without Reset: N/A 00:49:38.880 Multiple Update Detection Support: N/A 00:49:38.880 Firmware Update Granularity: No Information Provided 00:49:38.880 Per-Namespace SMART Log: No 00:49:38.880 Asymmetric Namespace Access Log Page: Not Supported 00:49:38.880 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:49:38.880 Command Effects Log Page: Supported 00:49:38.880 Get Log Page Extended Data: Supported 00:49:38.880 Telemetry Log Pages: Not Supported 00:49:38.880 Persistent Event Log Pages: Not Supported 00:49:38.880 Supported Log Pages Log Page: May Support 00:49:38.880 Commands Supported & Effects Log Page: Not Supported 00:49:38.880 Feature Identifiers & Effects Log Page:May Support 00:49:38.880 NVMe-MI Commands & Effects Log Page: May Support 00:49:38.880 Data Area 4 for Telemetry Log: Not Supported 00:49:38.880 Error Log Page Entries Supported: 128 00:49:38.880 Keep Alive: Supported 00:49:38.880 Keep Alive Granularity: 10000 ms 00:49:38.880 00:49:38.880 NVM Command Set Attributes 00:49:38.880 ========================== 00:49:38.880 Submission Queue Entry Size 00:49:38.880 Max: 64 00:49:38.880 Min: 64 00:49:38.880 Completion Queue Entry Size 00:49:38.880 Max: 16 00:49:38.880 Min: 16 00:49:38.880 Number of Namespaces: 32 00:49:38.880 Compare Command: Supported 00:49:38.880 Write Uncorrectable Command: Not Supported 00:49:38.880 Dataset Management Command: Supported 00:49:38.880 Write Zeroes Command: Supported 00:49:38.880 Set Features Save Field: Not Supported 00:49:38.880 Reservations: Supported 00:49:38.880 Timestamp: Not Supported 00:49:38.880 Copy: Supported 00:49:38.880 Volatile Write Cache: Present 00:49:38.880 Atomic Write Unit (Normal): 1 00:49:38.880 Atomic Write Unit (PFail): 1 00:49:38.880 Atomic Compare & Write Unit: 1 00:49:38.880 Fused Compare & Write: Supported 00:49:38.880 Scatter-Gather List 00:49:38.880 SGL Command Set: Supported 00:49:38.880 SGL Keyed: Supported 00:49:38.880 SGL Bit Bucket Descriptor: Not Supported 00:49:38.880 SGL Metadata Pointer: Not Supported 00:49:38.880 Oversized SGL: Not Supported 00:49:38.880 SGL Metadata Address: Not Supported 00:49:38.880 SGL Offset: Supported 00:49:38.880 Transport SGL Data Block: Not Supported 00:49:38.880 Replay Protected Memory Block: Not Supported 00:49:38.880 00:49:38.880 Firmware Slot Information 00:49:38.880 ========================= 00:49:38.880 Active slot: 1 00:49:38.880 Slot 1 Firmware Revision: 25.01 00:49:38.880 00:49:38.880 00:49:38.880 Commands Supported and Effects 00:49:38.880 ============================== 00:49:38.880 Admin Commands 00:49:38.880 -------------- 00:49:38.880 Get Log Page (02h): Supported 00:49:38.880 Identify (06h): Supported 00:49:38.880 Abort (08h): Supported 00:49:38.880 Set Features (09h): Supported 00:49:38.880 Get Features (0Ah): Supported 00:49:38.880 Asynchronous Event Request (0Ch): Supported 00:49:38.880 Keep Alive (18h): Supported 00:49:38.880 I/O Commands 00:49:38.880 ------------ 00:49:38.880 Flush (00h): Supported LBA-Change 00:49:38.880 Write (01h): Supported LBA-Change 00:49:38.880 Read (02h): Supported 00:49:38.880 Compare (05h): Supported 00:49:38.880 Write Zeroes (08h): 
Supported LBA-Change 00:49:38.880 Dataset Management (09h): Supported LBA-Change 00:49:38.880 Copy (19h): Supported LBA-Change 00:49:38.880 00:49:38.880 Error Log 00:49:38.880 ========= 00:49:38.880 00:49:38.880 Arbitration 00:49:38.880 =========== 00:49:38.880 Arbitration Burst: 1 00:49:38.880 00:49:38.880 Power Management 00:49:38.880 ================ 00:49:38.880 Number of Power States: 1 00:49:38.880 Current Power State: Power State #0 00:49:38.880 Power State #0: 00:49:38.880 Max Power: 0.00 W 00:49:38.880 Non-Operational State: Operational 00:49:38.880 Entry Latency: Not Reported 00:49:38.880 Exit Latency: Not Reported 00:49:38.880 Relative Read Throughput: 0 00:49:38.880 Relative Read Latency: 0 00:49:38.880 Relative Write Throughput: 0 00:49:38.880 Relative Write Latency: 0 00:49:38.880 Idle Power: Not Reported 00:49:38.881 Active Power: Not Reported 00:49:38.881 Non-Operational Permissive Mode: Not Supported 00:49:38.881 00:49:38.881 Health Information 00:49:38.881 ================== 00:49:38.881 Critical Warnings: 00:49:38.881 Available Spare Space: OK 00:49:38.881 Temperature: OK 00:49:38.881 Device Reliability: OK 00:49:38.881 Read Only: No 00:49:38.881 Volatile Memory Backup: OK 00:49:38.881 Current Temperature: 0 Kelvin (-273 Celsius) 00:49:38.881 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:49:38.881 Available Spare: 0% 00:49:38.881 Available Spare Threshold: 0% 00:49:38.881 Life Percentage Used:[2024-12-09 05:44:32.902129] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.881 [2024-12-09 05:44:32.902141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x9a1690) 00:49:38.881 [2024-12-09 05:44:32.902152] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.881 [2024-12-09 05:44:32.902175] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03b80, cid 7, qid 0 00:49:38.881 [2024-12-09 05:44:32.902265] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.881 [2024-12-09 05:44:32.902289] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.881 [2024-12-09 05:44:32.902297] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.881 [2024-12-09 05:44:32.902303] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03b80) on tqpair=0x9a1690 00:49:38.881 [2024-12-09 05:44:32.902351] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:49:38.881 [2024-12-09 05:44:32.902370] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03100) on tqpair=0x9a1690 00:49:38.881 [2024-12-09 05:44:32.902381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:38.881 [2024-12-09 05:44:32.902390] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03280) on tqpair=0x9a1690 00:49:38.881 [2024-12-09 05:44:32.902397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:38.881 [2024-12-09 05:44:32.902405] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03400) on tqpair=0x9a1690 00:49:38.881 [2024-12-09 05:44:32.902413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:38.881 [2024-12-09 
05:44:32.902420] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03580) on tqpair=0x9a1690 00:49:38.881 [2024-12-09 05:44:32.902428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:38.881 [2024-12-09 05:44:32.902440] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.881 [2024-12-09 05:44:32.902447] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.881 [2024-12-09 05:44:32.902454] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a1690) 00:49:38.881 [2024-12-09 05:44:32.902468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.881 [2024-12-09 05:44:32.902491] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03580, cid 3, qid 0 00:49:38.881 [2024-12-09 05:44:32.902572] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.881 [2024-12-09 05:44:32.902585] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.881 [2024-12-09 05:44:32.902592] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.881 [2024-12-09 05:44:32.902599] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03580) on tqpair=0x9a1690 00:49:38.881 [2024-12-09 05:44:32.902610] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.881 [2024-12-09 05:44:32.902618] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.881 [2024-12-09 05:44:32.902624] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a1690) 00:49:38.881 [2024-12-09 05:44:32.902635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.881 [2024-12-09 05:44:32.902660] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03580, cid 3, qid 0 00:49:38.881 [2024-12-09 05:44:32.902752] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.881 [2024-12-09 05:44:32.902766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.881 [2024-12-09 05:44:32.902773] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.881 [2024-12-09 05:44:32.902780] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03580) on tqpair=0x9a1690 00:49:38.881 [2024-12-09 05:44:32.902787] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:49:38.881 [2024-12-09 05:44:32.902795] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:49:38.881 [2024-12-09 05:44:32.902811] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.881 [2024-12-09 05:44:32.902820] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.881 [2024-12-09 05:44:32.902826] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a1690) 00:49:38.881 [2024-12-09 05:44:32.902836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.881 [2024-12-09 05:44:32.902856] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03580, cid 3, qid 0 00:49:38.881 [2024-12-09 05:44:32.902931] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:49:38.881 [2024-12-09 05:44:32.902944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.881 [2024-12-09 05:44:32.902951] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.881 [2024-12-09 05:44:32.902958] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03580) on tqpair=0x9a1690 00:49:38.881 [2024-12-09 05:44:32.902974] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.881 [2024-12-09 05:44:32.902984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.881 [2024-12-09 05:44:32.902990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a1690) 00:49:38.881 [2024-12-09 05:44:32.903001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.881 [2024-12-09 05:44:32.903021] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03580, cid 3, qid 0 00:49:38.881 [2024-12-09 05:44:32.903088] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.881 [2024-12-09 05:44:32.903101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.881 [2024-12-09 05:44:32.903108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.881 [2024-12-09 05:44:32.903115] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03580) on tqpair=0x9a1690 00:49:38.881 [2024-12-09 05:44:32.903130] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.881 [2024-12-09 05:44:32.903140] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.881 [2024-12-09 05:44:32.903152] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a1690) 00:49:38.881 [2024-12-09 05:44:32.903163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.881 [2024-12-09 05:44:32.903183] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03580, cid 3, qid 0 00:49:38.881 [2024-12-09 05:44:32.903257] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.881 [2024-12-09 05:44:32.903279] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.881 [2024-12-09 05:44:32.903288] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.881 [2024-12-09 05:44:32.903295] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03580) on tqpair=0x9a1690 00:49:38.881 [2024-12-09 05:44:32.903312] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.881 [2024-12-09 05:44:32.903321] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.881 [2024-12-09 05:44:32.903328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a1690) 00:49:38.881 [2024-12-09 05:44:32.903338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.881 [2024-12-09 05:44:32.903359] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03580, cid 3, qid 0 00:49:38.881 [2024-12-09 05:44:32.903430] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.881 [2024-12-09 05:44:32.903443] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.881 [2024-12-09 05:44:32.903450] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.881 [2024-12-09 05:44:32.903457] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03580) on tqpair=0x9a1690 00:49:38.881 [2024-12-09 05:44:32.903472] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.881 [2024-12-09 05:44:32.903482] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.882 [2024-12-09 05:44:32.903488] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a1690) 00:49:38.882 [2024-12-09 05:44:32.903498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.882 [2024-12-09 05:44:32.903518] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03580, cid 3, qid 0 00:49:38.882 [2024-12-09 05:44:32.903588] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.882 [2024-12-09 05:44:32.903601] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.882 [2024-12-09 05:44:32.903608] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.882 [2024-12-09 05:44:32.903615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03580) on tqpair=0x9a1690 00:49:38.882 [2024-12-09 05:44:32.903630] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.882 [2024-12-09 05:44:32.903640] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.882 [2024-12-09 05:44:32.903646] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a1690) 00:49:38.882 [2024-12-09 05:44:32.903656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.882 [2024-12-09 05:44:32.903676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03580, cid 3, qid 0 00:49:38.882 [2024-12-09 05:44:32.903747] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.882 [2024-12-09 05:44:32.903761] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.882 [2024-12-09 05:44:32.903768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.882 [2024-12-09 05:44:32.903775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03580) on tqpair=0x9a1690 00:49:38.882 [2024-12-09 05:44:32.903791] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.882 [2024-12-09 05:44:32.903800] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.882 [2024-12-09 05:44:32.903806] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a1690) 00:49:38.882 [2024-12-09 05:44:32.903821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.882 [2024-12-09 05:44:32.903842] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03580, cid 3, qid 0 00:49:38.882 [2024-12-09 05:44:32.903909] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.882 [2024-12-09 05:44:32.903922] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.882 [2024-12-09 05:44:32.903929] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.882 [2024-12-09 05:44:32.903936] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03580) on tqpair=0x9a1690 00:49:38.882 
[2024-12-09 05:44:32.903952] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.882 [2024-12-09 05:44:32.903961] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.882 [2024-12-09 05:44:32.903967] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a1690) 00:49:38.882 [2024-12-09 05:44:32.903977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.882 [2024-12-09 05:44:32.903997] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03580, cid 3, qid 0 00:49:38.882 [2024-12-09 05:44:32.904067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.882 [2024-12-09 05:44:32.904079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.882 [2024-12-09 05:44:32.904086] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.882 [2024-12-09 05:44:32.904093] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03580) on tqpair=0x9a1690 00:49:38.882 [2024-12-09 05:44:32.904109] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.882 [2024-12-09 05:44:32.904118] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.882 [2024-12-09 05:44:32.904124] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a1690) 00:49:38.882 [2024-12-09 05:44:32.904135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.882 [2024-12-09 05:44:32.904154] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03580, cid 3, qid 0 00:49:38.882 [2024-12-09 05:44:32.904225] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.882 [2024-12-09 05:44:32.904238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.882 [2024-12-09 05:44:32.904245] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.882 [2024-12-09 05:44:32.904252] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03580) on tqpair=0x9a1690 00:49:38.882 [2024-12-09 05:44:32.904268] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.882 [2024-12-09 05:44:32.904286] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.882 [2024-12-09 05:44:32.904293] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a1690) 00:49:38.882 [2024-12-09 05:44:32.904303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.882 [2024-12-09 05:44:32.904324] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03580, cid 3, qid 0 00:49:38.882 [2024-12-09 05:44:32.908285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.882 [2024-12-09 05:44:32.908303] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.882 [2024-12-09 05:44:32.908311] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.882 [2024-12-09 05:44:32.908318] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03580) on tqpair=0x9a1690 00:49:38.882 [2024-12-09 05:44:32.908351] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:49:38.882 [2024-12-09 05:44:32.908361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:49:38.882 [2024-12-09 
05:44:32.908368] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a1690) 00:49:38.882 [2024-12-09 05:44:32.908378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:38.882 [2024-12-09 05:44:32.908405] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03580, cid 3, qid 0 00:49:38.882 [2024-12-09 05:44:32.908521] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:49:38.882 [2024-12-09 05:44:32.908536] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:49:38.882 [2024-12-09 05:44:32.908543] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:49:38.882 [2024-12-09 05:44:32.908550] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa03580) on tqpair=0x9a1690 00:49:38.882 [2024-12-09 05:44:32.908563] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:49:38.882 0% 00:49:38.882 Data Units Read: 0 00:49:38.882 Data Units Written: 0 00:49:38.882 Host Read Commands: 0 00:49:38.882 Host Write Commands: 0 00:49:38.882 Controller Busy Time: 0 minutes 00:49:38.882 Power Cycles: 0 00:49:38.882 Power On Hours: 0 hours 00:49:38.882 Unsafe Shutdowns: 0 00:49:38.882 Unrecoverable Media Errors: 0 00:49:38.882 Lifetime Error Log Entries: 0 00:49:38.882 Warning Temperature Time: 0 minutes 00:49:38.882 Critical Temperature Time: 0 minutes 00:49:38.882 00:49:38.882 Number of Queues 00:49:38.882 ================ 00:49:38.882 Number of I/O Submission Queues: 127 00:49:38.882 Number of I/O Completion Queues: 127 00:49:38.882 00:49:38.882 Active Namespaces 00:49:38.882 ================= 00:49:38.882 Namespace ID:1 00:49:38.882 Error Recovery Timeout: Unlimited 00:49:38.882 Command Set Identifier: NVM (00h) 00:49:38.882 Deallocate: Supported 00:49:38.882 Deallocated/Unwritten Error: Not Supported 00:49:38.882 Deallocated Read Value: Unknown 00:49:38.882 Deallocate in Write Zeroes: Not Supported 00:49:38.882 Deallocated Guard Field: 0xFFFF 00:49:38.882 Flush: Supported 00:49:38.882 Reservation: Supported 00:49:38.882 Namespace Sharing Capabilities: Multiple Controllers 00:49:38.882 Size (in LBAs): 131072 (0GiB) 00:49:38.882 Capacity (in LBAs): 131072 (0GiB) 00:49:38.882 Utilization (in LBAs): 131072 (0GiB) 00:49:38.882 NGUID: ABCDEF0123456789ABCDEF0123456789 00:49:38.882 EUI64: ABCDEF0123456789 00:49:38.882 UUID: f75b3134-7bb0-4611-832a-9b726f4e1665 00:49:38.882 Thin Provisioning: Not Supported 00:49:38.882 Per-NS Atomic Units: Yes 00:49:38.882 Atomic Boundary Size (Normal): 0 00:49:38.882 Atomic Boundary Size (PFail): 0 00:49:38.882 Atomic Boundary Offset: 0 00:49:38.882 Maximum Single Source Range Length: 65535 00:49:38.883 Maximum Copy Length: 65535 00:49:38.883 Maximum Source Range Count: 1 00:49:38.883 NGUID/EUI64 Never Reused: No 00:49:38.883 Namespace Write Protected: No 00:49:38.883 Number of LBA Formats: 1 00:49:38.883 Current LBA Format: LBA Format #00 00:49:38.883 LBA Format #00: Data Size: 512 Metadata Size: 0 00:49:38.883 00:49:38.883 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:49:38.883 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:49:38.883 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:49:38.883 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@10 -- # set +x 00:49:38.883 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:49:38.883 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:49:38.883 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:49:38.883 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:49:38.883 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:49:38.883 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:49:38.883 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:49:38.883 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:49:38.883 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:49:38.883 rmmod nvme_tcp 00:49:38.883 rmmod nvme_fabrics 00:49:38.883 rmmod nvme_keyring 00:49:38.883 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:49:38.883 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:49:38.883 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:49:38.883 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 710952 ']' 00:49:38.883 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 710952 00:49:38.883 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 710952 ']' 00:49:38.883 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 710952 00:49:38.883 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:49:38.883 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:38.883 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 710952 00:49:39.141 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:49:39.141 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:49:39.141 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 710952' 00:49:39.141 killing process with pid 710952 00:49:39.141 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 710952 00:49:39.141 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 710952 00:49:39.400 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:49:39.400 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:49:39.400 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:49:39.400 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:49:39.400 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:49:39.400 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:49:39.400 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:49:39.400 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:49:39.400 05:44:33 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:49:39.400 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:39.400 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:39.400 05:44:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:41.300 05:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:49:41.300 00:49:41.300 real 0m6.058s 00:49:41.300 user 0m5.832s 00:49:41.300 sys 0m2.084s 00:49:41.300 05:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:41.300 05:44:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:49:41.300 ************************************ 00:49:41.300 END TEST nvmf_identify 00:49:41.300 ************************************ 00:49:41.300 05:44:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:49:41.300 05:44:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:49:41.300 05:44:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:41.300 05:44:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:49:41.300 ************************************ 00:49:41.300 START TEST nvmf_perf 00:49:41.300 ************************************ 00:49:41.300 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:49:41.559 * Looking for test storage... 
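For reference, the nvmf_identify stage that just finished reduces to a short sequence of stock SPDK commands: start an nvmf target, expose a bdev through a TCP subsystem listening on 10.0.0.2:4420, run the identify example against it, and tear the subsystem down with the same nvmf_delete_subsystem RPC seen in the trace. A minimal sketch of that flow, assuming a built SPDK tree with the standard rpc.py helpers; the bdev name Malloc0, its size, and the binary paths are illustrative assumptions, not values taken from this run:

./build/bin/nvmf_tgt &                                   # start the target application (path assumes a default build)
./scripts/rpc.py nvmf_create_transport -t tcp            # enable the TCP transport
./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512    # illustrative 64 MiB backing bdev with 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./build/examples/identify -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # same teardown RPC as host/identify.sh@52 above

The controller report above (serial SPDK00000000000001, model "SPDK bdev Controller", firmware 25.01) is the kind of output that final identify invocation prints once the controller reaches the ready state logged by nvme_ctrlr.c.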
00:49:41.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:49:41.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:41.559 --rc genhtml_branch_coverage=1 00:49:41.559 --rc genhtml_function_coverage=1 00:49:41.559 --rc genhtml_legend=1 00:49:41.559 --rc geninfo_all_blocks=1 00:49:41.559 --rc geninfo_unexecuted_blocks=1 00:49:41.559 00:49:41.559 ' 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:49:41.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:41.559 --rc genhtml_branch_coverage=1 00:49:41.559 --rc genhtml_function_coverage=1 00:49:41.559 --rc genhtml_legend=1 00:49:41.559 --rc geninfo_all_blocks=1 00:49:41.559 --rc geninfo_unexecuted_blocks=1 00:49:41.559 00:49:41.559 ' 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:49:41.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:41.559 --rc genhtml_branch_coverage=1 00:49:41.559 --rc genhtml_function_coverage=1 00:49:41.559 --rc genhtml_legend=1 00:49:41.559 --rc geninfo_all_blocks=1 00:49:41.559 --rc geninfo_unexecuted_blocks=1 00:49:41.559 00:49:41.559 ' 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:49:41.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:41.559 --rc genhtml_branch_coverage=1 00:49:41.559 --rc genhtml_function_coverage=1 00:49:41.559 --rc genhtml_legend=1 00:49:41.559 --rc geninfo_all_blocks=1 00:49:41.559 --rc geninfo_unexecuted_blocks=1 00:49:41.559 00:49:41.559 ' 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:49:41.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:49:41.559 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:49:41.560 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:49:41.560 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:41.560 05:44:35 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:49:41.560 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:41.560 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:49:41.560 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:49:41.560 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:49:41.560 05:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:49:44.089 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:49:44.089 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:49:44.089 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:49:44.089 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:49:44.089 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:49:44.089 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:49:44.089 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:49:44.089 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:49:44.089 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:49:44.089 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:49:44.089 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:49:44.089 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:49:44.089 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:49:44.089 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:49:44.089 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:49:44.089 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:49:44.090 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:49:44.090 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:49:44.090 Found net devices under 0000:0a:00.0: cvl_0_0 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:49:44.090 05:44:37 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:49:44.090 Found net devices under 0000:0a:00.1: cvl_0_1 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:49:44.090 05:44:37 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:49:44.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:49:44.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:49:44.090 00:49:44.090 --- 10.0.0.2 ping statistics --- 00:49:44.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:44.090 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:49:44.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:49:44.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:49:44.090 00:49:44.090 --- 10.0.0.1 ping statistics --- 00:49:44.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:44.090 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=713046 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 713046 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 713046 ']' 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:49:44.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:44.090 05:44:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:49:44.091 [2024-12-09 05:44:38.007066] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:49:44.091 [2024-12-09 05:44:38.007136] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:44.091 [2024-12-09 05:44:38.078071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:49:44.091 [2024-12-09 05:44:38.134969] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:44.091 [2024-12-09 05:44:38.135039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:44.091 [2024-12-09 05:44:38.135054] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:44.091 [2024-12-09 05:44:38.135066] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:44.091 [2024-12-09 05:44:38.135076] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:49:44.091 [2024-12-09 05:44:38.136647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:49:44.091 [2024-12-09 05:44:38.136704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:49:44.091 [2024-12-09 05:44:38.136769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:49:44.091 [2024-12-09 05:44:38.136773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:44.091 05:44:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:44.091 05:44:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:49:44.091 05:44:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:49:44.091 05:44:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:49:44.091 05:44:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:49:44.091 05:44:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:49:44.091 05:44:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:49:44.091 05:44:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:49:47.372 05:44:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:49:47.372 05:44:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:49:47.630 05:44:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:49:47.630 05:44:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:49:47.888 05:44:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
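At this point perf.sh has queried the running target for the PCI address of the controller that gen_nvme.sh attached as Nvme0 (0000:88:00.0 in this run) and created a 64 MB Malloc bdev next to it. Run by hand against a target already listening on the default RPC socket, the same steps would look roughly like this (paths and names are the ones from this workspace and run):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # PCI address of the bdev registered as "Nvme0" (prints 0000:88:00.0 here)
  local_nvme_trid=$("$rpc" framework_get_config bdev \
      | jq -r '.[].params | select(.name=="Nvme0").traddr')

  # 64 MB RAM-backed bdev with 512-byte blocks; rpc.py prints the new name (Malloc0)
  malloc_name=$("$rpc" bdev_malloc_create 64 512)

  # perf.sh then exercises the Malloc bdev, plus Nvme0n1 when a local controller was found
  bdevs=" $malloc_name"
  [ -n "$local_nvme_trid" ] && bdevs="$bdevs Nvme0n1"

The subsystem, namespaces and TCP listener built from these bdevs follow in the trace below.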
00:49:47.888 05:44:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:49:47.888 05:44:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:49:47.888 05:44:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:49:47.888 05:44:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:49:48.146 [2024-12-09 05:44:42.260396] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:48.146 05:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:49:48.403 05:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:49:48.403 05:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:49:48.661 05:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:49:48.661 05:44:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:49:48.919 05:44:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:49:49.177 [2024-12-09 05:44:43.364422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:49:49.178 05:44:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:49:49.436 05:44:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:49:49.436 05:44:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:49:49.436 05:44:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:49:49.436 05:44:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:49:50.809 Initializing NVMe Controllers 00:49:50.809 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:49:50.809 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:49:50.809 Initialization complete. Launching workers. 
00:49:50.809 ======================================================== 00:49:50.809 Latency(us) 00:49:50.809 Device Information : IOPS MiB/s Average min max 00:49:50.809 PCIE (0000:88:00.0) NSID 1 from core 0: 85288.18 333.16 374.38 49.03 7228.26 00:49:50.809 ======================================================== 00:49:50.809 Total : 85288.18 333.16 374.38 49.03 7228.26 00:49:50.809 00:49:50.809 05:44:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:49:52.187 Initializing NVMe Controllers 00:49:52.187 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:49:52.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:49:52.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:49:52.187 Initialization complete. Launching workers. 00:49:52.187 ======================================================== 00:49:52.187 Latency(us) 00:49:52.187 Device Information : IOPS MiB/s Average min max 00:49:52.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 101.00 0.39 10301.77 166.26 44837.09 00:49:52.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 20497.67 6980.90 47900.60 00:49:52.187 ======================================================== 00:49:52.187 Total : 152.00 0.59 13722.76 166.26 47900.60 00:49:52.187 00:49:52.187 05:44:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:49:53.557 Initializing NVMe Controllers 00:49:53.557 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:49:53.557 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:49:53.557 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:49:53.557 Initialization complete. Launching workers. 00:49:53.557 ======================================================== 00:49:53.557 Latency(us) 00:49:53.557 Device Information : IOPS MiB/s Average min max 00:49:53.557 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8393.11 32.79 3812.98 564.24 10726.93 00:49:53.557 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3779.64 14.76 8494.78 6836.41 19260.83 00:49:53.557 ======================================================== 00:49:53.557 Total : 12172.74 47.55 5266.68 564.24 19260.83 00:49:53.557 00:49:53.557 05:44:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:49:53.557 05:44:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:49:53.557 05:44:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:49:56.081 Initializing NVMe Controllers 00:49:56.081 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:49:56.081 Controller IO queue size 128, less than required. 00:49:56.081 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:49:56.081 Controller IO queue size 128, less than required. 00:49:56.081 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:49:56.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:49:56.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:49:56.081 Initialization complete. Launching workers. 00:49:56.081 ======================================================== 00:49:56.081 Latency(us) 00:49:56.081 Device Information : IOPS MiB/s Average min max 00:49:56.081 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1730.36 432.59 74533.97 49443.46 116913.29 00:49:56.081 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 569.95 142.49 233357.20 109579.37 367656.38 00:49:56.081 ======================================================== 00:49:56.081 Total : 2300.32 575.08 113885.96 49443.46 367656.38 00:49:56.081 00:49:56.338 05:44:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:49:56.338 No valid NVMe controllers or AIO or URING devices found 00:49:56.338 Initializing NVMe Controllers 00:49:56.338 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:49:56.338 Controller IO queue size 128, less than required. 00:49:56.338 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:49:56.338 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:49:56.338 Controller IO queue size 128, less than required. 00:49:56.338 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:49:56.338 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:49:56.338 WARNING: Some requested NVMe devices were skipped 00:49:56.596 05:44:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:49:59.126 Initializing NVMe Controllers 00:49:59.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:49:59.126 Controller IO queue size 128, less than required. 00:49:59.126 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:49:59.126 Controller IO queue size 128, less than required. 00:49:59.126 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:49:59.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:49:59.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:49:59.126 Initialization complete. Launching workers. 
00:49:59.126 00:49:59.126 ==================== 00:49:59.126 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:49:59.126 TCP transport: 00:49:59.126 polls: 12216 00:49:59.126 idle_polls: 9036 00:49:59.126 sock_completions: 3180 00:49:59.126 nvme_completions: 5981 00:49:59.126 submitted_requests: 8970 00:49:59.126 queued_requests: 1 00:49:59.126 00:49:59.126 ==================== 00:49:59.126 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:49:59.126 TCP transport: 00:49:59.126 polls: 12612 00:49:59.126 idle_polls: 8798 00:49:59.126 sock_completions: 3814 00:49:59.126 nvme_completions: 6279 00:49:59.126 submitted_requests: 9432 00:49:59.126 queued_requests: 1 00:49:59.126 ======================================================== 00:49:59.126 Latency(us) 00:49:59.126 Device Information : IOPS MiB/s Average min max 00:49:59.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1494.85 373.71 86606.46 62130.91 145757.30 00:49:59.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1569.34 392.34 82383.50 41598.34 132361.46 00:49:59.126 ======================================================== 00:49:59.126 Total : 3064.19 766.05 84443.65 41598.34 145757.30 00:49:59.126 00:49:59.384 05:44:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:49:59.384 05:44:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:49:59.642 05:44:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:49:59.642 05:44:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:49:59.642 05:44:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:49:59.642 05:44:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:49:59.642 05:44:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:49:59.642 05:44:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:49:59.642 05:44:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:49:59.642 05:44:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:49:59.642 05:44:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:49:59.642 rmmod nvme_tcp 00:49:59.642 rmmod nvme_fabrics 00:49:59.642 rmmod nvme_keyring 00:49:59.642 05:44:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:49:59.642 05:44:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:49:59.642 05:44:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:49:59.642 05:44:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 713046 ']' 00:49:59.642 05:44:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 713046 00:49:59.642 05:44:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 713046 ']' 00:49:59.642 05:44:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 713046 00:49:59.642 05:44:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:49:59.642 05:44:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:59.642 05:44:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 713046 00:49:59.642 05:44:53 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:49:59.642 05:44:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:49:59.642 05:44:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 713046' 00:49:59.642 killing process with pid 713046 00:49:59.642 05:44:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 713046 00:49:59.642 05:44:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 713046 00:50:01.548 05:44:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:50:01.548 05:44:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:50:01.548 05:44:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:50:01.548 05:44:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:50:01.548 05:44:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:50:01.548 05:44:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:50:01.548 05:44:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:50:01.548 05:44:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:50:01.548 05:44:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:50:01.548 05:44:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:01.548 05:44:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:50:01.548 05:44:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:03.454 05:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:50:03.454 00:50:03.454 real 0m21.941s 00:50:03.454 user 1m8.068s 00:50:03.454 sys 0m5.578s 00:50:03.454 05:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:50:03.454 05:44:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:50:03.454 ************************************ 00:50:03.454 END TEST nvmf_perf 00:50:03.454 ************************************ 00:50:03.454 05:44:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:50:03.454 05:44:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:50:03.454 05:44:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:50:03.454 05:44:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:50:03.454 ************************************ 00:50:03.454 START TEST nvmf_fio_host 00:50:03.454 ************************************ 00:50:03.454 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:50:03.454 * Looking for test storage... 
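Before nvmf_fio_host started above, the harness tore the perf environment back down: the trace shows nvmf_delete_subsystem, the modprobe -r of the nvme-tcp/fabrics/keyring modules, killprocess on the target pid, the iptables-save | grep -v SPDK_NVMF | iptables-restore pass, and the final address flush. Condensed into a stand-alone sketch (the pid, NQN, namespace and interface names are specific to this run, the netns removal is an assumption about what the redirected _remove_spdk_ns call performs, and the real nvmftestfini/killprocess helpers handle more corner cases):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  pid=713046                                               # nvmf_tgt pid in this run

  "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # drop the test subsystem
  modprobe -v -r nvme-tcp nvme-fabrics                     # unload host-side NVMe/TCP modules
  kill "$pid"                                              # stop nvmf_tgt ...
  while kill -0 "$pid" 2>/dev/null; do sleep 1; done       # ... and wait for it to exit
  iptables-save | grep -v SPDK_NVMF | iptables-restore     # strip the SPDK-tagged ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true      # assumed: remove the target netns
  ip -4 addr flush cvl_0_1                                 # clear the initiator-side address
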
00:50:03.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:50:03.454 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:50:03.454 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:50:03.454 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:50:03.454 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:50:03.454 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:50:03.454 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:50:03.454 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:50:03.454 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:50:03.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:03.455 --rc genhtml_branch_coverage=1 00:50:03.455 --rc genhtml_function_coverage=1 00:50:03.455 --rc genhtml_legend=1 00:50:03.455 --rc geninfo_all_blocks=1 00:50:03.455 --rc geninfo_unexecuted_blocks=1 00:50:03.455 00:50:03.455 ' 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:50:03.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:03.455 --rc genhtml_branch_coverage=1 00:50:03.455 --rc genhtml_function_coverage=1 00:50:03.455 --rc genhtml_legend=1 00:50:03.455 --rc geninfo_all_blocks=1 00:50:03.455 --rc geninfo_unexecuted_blocks=1 00:50:03.455 00:50:03.455 ' 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:50:03.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:03.455 --rc genhtml_branch_coverage=1 00:50:03.455 --rc genhtml_function_coverage=1 00:50:03.455 --rc genhtml_legend=1 00:50:03.455 --rc geninfo_all_blocks=1 00:50:03.455 --rc geninfo_unexecuted_blocks=1 00:50:03.455 00:50:03.455 ' 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:50:03.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:03.455 --rc genhtml_branch_coverage=1 00:50:03.455 --rc genhtml_function_coverage=1 00:50:03.455 --rc genhtml_legend=1 00:50:03.455 --rc geninfo_all_blocks=1 00:50:03.455 --rc geninfo_unexecuted_blocks=1 00:50:03.455 00:50:03.455 ' 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:50:03.455 05:44:57 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:03.455 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:03.715 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:50:03.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:50:03.716 
05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:50:03.716 05:44:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:50:06.245 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:50:06.245 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:50:06.245 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:50:06.245 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:50:06.245 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:50:06.245 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:50:06.245 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:50:06.245 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:50:06.245 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:50:06.245 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:50:06.245 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:50:06.245 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:50:06.245 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:50:06.245 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:50:06.245 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:50:06.246 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:50:06.246 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:50:06.246 Found net devices under 0000:0a:00.0: cvl_0_0 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:50:06.246 Found net devices under 0000:0a:00.1: cvl_0_1 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:50:06.246 05:44:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:50:06.246 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:50:06.246 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:50:06.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:50:06.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:50:06.247 00:50:06.247 --- 10.0.0.2 ping statistics --- 00:50:06.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:06.247 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:50:06.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:50:06.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:50:06.247 00:50:06.247 --- 10.0.0.1 ping statistics --- 00:50:06.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:06.247 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=717159 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 717159 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 717159 ']' 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:06.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:50:06.247 [2024-12-09 05:45:00.174862] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
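The nvmftestinit trace above isolates the first e810 port in its own network namespace so the SPDK target and the initiator-side tooling can share one machine. Condensed into a plain script (interface names, addresses, and the iptables comment tag are copied verbatim from the trace), the setup is:

ip netns add cvl_0_0_ns_spdk                       # namespace the target will run in
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port and confirm reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1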
00:50:06.247 [2024-12-09 05:45:00.174946] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:50:06.247 [2024-12-09 05:45:00.255707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:50:06.247 [2024-12-09 05:45:00.318920] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:50:06.247 [2024-12-09 05:45:00.318968] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:50:06.247 [2024-12-09 05:45:00.318994] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:50:06.247 [2024-12-09 05:45:00.319005] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:50:06.247 [2024-12-09 05:45:00.319015] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:50:06.247 [2024-12-09 05:45:00.320619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:50:06.247 [2024-12-09 05:45:00.320749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:50:06.247 [2024-12-09 05:45:00.320808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:50:06.247 [2024-12-09 05:45:00.320811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:50:06.247 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:50:06.505 [2024-12-09 05:45:00.706615] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:50:06.763 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:50:06.763 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:50:06.763 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:50:06.763 05:45:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:50:07.020 Malloc1 00:50:07.021 05:45:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:50:07.277 05:45:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:50:07.548 05:45:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:50:07.806 [2024-12-09 05:45:01.892488] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:50:07.806 05:45:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:50:08.064 05:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:50:08.064 05:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:50:08.064 05:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:50:08.064 05:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:50:08.064 05:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:50:08.064 05:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:50:08.064 05:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:50:08.064 05:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:50:08.064 05:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:50:08.064 05:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:50:08.064 05:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:50:08.064 05:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:50:08.064 05:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:50:08.064 05:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:50:08.064 05:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:50:08.064 05:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:50:08.064 05:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:50:08.064 05:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:50:08.064 05:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:50:08.064 05:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:50:08.064 05:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:50:08.064 05:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:50:08.064 05:45:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:50:08.322 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:50:08.322 fio-3.35 00:50:08.322 Starting 1 thread 00:50:10.851 00:50:10.851 test: (groupid=0, jobs=1): 
err= 0: pid=717618: Mon Dec 9 05:45:04 2024 00:50:10.851 read: IOPS=8063, BW=31.5MiB/s (33.0MB/s)(63.2MiB/2008msec) 00:50:10.851 slat (usec): min=2, max=204, avg= 2.69, stdev= 2.27 00:50:10.851 clat (usec): min=2874, max=15382, avg=8656.38, stdev=740.53 00:50:10.851 lat (usec): min=2906, max=15385, avg=8659.07, stdev=740.42 00:50:10.851 clat percentiles (usec): 00:50:10.851 | 1.00th=[ 6915], 5.00th=[ 7504], 10.00th=[ 7767], 20.00th=[ 8029], 00:50:10.851 | 30.00th=[ 8291], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8848], 00:50:10.851 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9503], 95.00th=[ 9765], 00:50:10.851 | 99.00th=[10290], 99.50th=[10421], 99.90th=[12125], 99.95th=[13435], 00:50:10.851 | 99.99th=[15270] 00:50:10.851 bw ( KiB/s): min=31288, max=32728, per=100.00%, avg=32266.00, stdev=660.42, samples=4 00:50:10.851 iops : min= 7822, max= 8182, avg=8066.50, stdev=165.11, samples=4 00:50:10.851 write: IOPS=8047, BW=31.4MiB/s (33.0MB/s)(63.1MiB/2008msec); 0 zone resets 00:50:10.851 slat (usec): min=2, max=132, avg= 2.78, stdev= 1.56 00:50:10.851 clat (usec): min=1494, max=15193, avg=7175.82, stdev=637.99 00:50:10.851 lat (usec): min=1503, max=15195, avg=7178.60, stdev=637.93 00:50:10.851 clat percentiles (usec): 00:50:10.851 | 1.00th=[ 5735], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 6718], 00:50:10.851 | 30.00th=[ 6915], 40.00th=[ 7046], 50.00th=[ 7177], 60.00th=[ 7308], 00:50:10.851 | 70.00th=[ 7504], 80.00th=[ 7635], 90.00th=[ 7898], 95.00th=[ 8029], 00:50:10.851 | 99.00th=[ 8455], 99.50th=[ 8717], 99.90th=[12256], 99.95th=[13435], 00:50:10.851 | 99.99th=[15139] 00:50:10.851 bw ( KiB/s): min=32016, max=32328, per=99.98%, avg=32184.00, stdev=133.07, samples=4 00:50:10.851 iops : min= 8004, max= 8082, avg=8046.00, stdev=33.27, samples=4 00:50:10.851 lat (msec) : 2=0.01%, 4=0.12%, 10=98.45%, 20=1.42% 00:50:10.851 cpu : usr=60.54%, sys=37.72%, ctx=68, majf=0, minf=35 00:50:10.851 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:50:10.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:50:10.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:50:10.851 issued rwts: total=16192,16160,0,0 short=0,0,0,0 dropped=0,0,0,0 00:50:10.851 latency : target=0, window=0, percentile=100.00%, depth=128 00:50:10.851 00:50:10.851 Run status group 0 (all jobs): 00:50:10.851 READ: bw=31.5MiB/s (33.0MB/s), 31.5MiB/s-31.5MiB/s (33.0MB/s-33.0MB/s), io=63.2MiB (66.3MB), run=2008-2008msec 00:50:10.851 WRITE: bw=31.4MiB/s (33.0MB/s), 31.4MiB/s-31.4MiB/s (33.0MB/s-33.0MB/s), io=63.1MiB (66.2MB), run=2008-2008msec 00:50:10.851 05:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:50:10.851 05:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:50:10.851 05:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:50:10.851 05:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:50:10.851 05:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local 
sanitizers 00:50:10.851 05:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:50:10.851 05:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:50:10.851 05:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:50:10.851 05:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:50:10.851 05:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:50:10.851 05:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:50:10.851 05:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:50:10.851 05:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:50:10.851 05:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:50:10.851 05:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:50:10.851 05:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:50:10.851 05:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:50:10.851 05:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:50:10.851 05:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:50:10.851 05:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:50:10.851 05:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:50:10.851 05:45:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:50:11.114 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:50:11.114 fio-3.35 00:50:11.114 Starting 1 thread 00:50:13.648 00:50:13.648 test: (groupid=0, jobs=1): err= 0: pid=717962: Mon Dec 9 05:45:07 2024 00:50:13.648 read: IOPS=8213, BW=128MiB/s (135MB/s)(258MiB/2007msec) 00:50:13.648 slat (nsec): min=2868, max=94407, avg=3860.88, stdev=1691.93 00:50:13.648 clat (usec): min=2314, max=16513, avg=8849.60, stdev=1935.81 00:50:13.648 lat (usec): min=2318, max=16517, avg=8853.46, stdev=1935.83 00:50:13.648 clat percentiles (usec): 00:50:13.648 | 1.00th=[ 4817], 5.00th=[ 5866], 10.00th=[ 6390], 20.00th=[ 7177], 00:50:13.648 | 30.00th=[ 7767], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9372], 00:50:13.648 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[11469], 95.00th=[12256], 00:50:13.648 | 99.00th=[14353], 99.50th=[14877], 99.90th=[15664], 99.95th=[15926], 00:50:13.648 | 99.99th=[16319] 00:50:13.648 bw ( KiB/s): min=56864, max=75936, per=51.64%, avg=67864.00, stdev=9298.83, samples=4 00:50:13.648 iops : min= 3554, max= 4746, avg=4241.50, stdev=581.18, samples=4 00:50:13.648 write: IOPS=4902, BW=76.6MiB/s (80.3MB/s)(138MiB/1807msec); 0 zone resets 00:50:13.648 slat 
(usec): min=30, max=129, avg=34.48, stdev= 4.85 00:50:13.648 clat (usec): min=5031, max=18835, avg=11722.94, stdev=1891.07 00:50:13.648 lat (usec): min=5067, max=18867, avg=11757.42, stdev=1890.99 00:50:13.648 clat percentiles (usec): 00:50:13.648 | 1.00th=[ 7701], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10159], 00:50:13.648 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11600], 60.00th=[12125], 00:50:13.648 | 70.00th=[12649], 80.00th=[13173], 90.00th=[14222], 95.00th=[15008], 00:50:13.648 | 99.00th=[16712], 99.50th=[17433], 99.90th=[18482], 99.95th=[18744], 00:50:13.648 | 99.99th=[18744] 00:50:13.648 bw ( KiB/s): min=59712, max=78400, per=89.90%, avg=70520.00, stdev=9140.74, samples=4 00:50:13.648 iops : min= 3732, max= 4900, avg=4407.50, stdev=571.30, samples=4 00:50:13.648 lat (msec) : 4=0.15%, 10=55.15%, 20=44.70% 00:50:13.648 cpu : usr=76.93%, sys=21.82%, ctx=53, majf=0, minf=59 00:50:13.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:50:13.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:50:13.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:50:13.648 issued rwts: total=16484,8859,0,0 short=0,0,0,0 dropped=0,0,0,0 00:50:13.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:50:13.648 00:50:13.648 Run status group 0 (all jobs): 00:50:13.648 READ: bw=128MiB/s (135MB/s), 128MiB/s-128MiB/s (135MB/s-135MB/s), io=258MiB (270MB), run=2007-2007msec 00:50:13.648 WRITE: bw=76.6MiB/s (80.3MB/s), 76.6MiB/s-76.6MiB/s (80.3MB/s-80.3MB/s), io=138MiB (145MB), run=1807-1807msec 00:50:13.648 05:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:50:13.648 05:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:50:13.648 05:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:50:13.648 05:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:50:13.648 05:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:50:13.648 05:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:50:13.649 05:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:50:13.649 05:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:50:13.649 05:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:50:13.649 05:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:50:13.649 05:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:50:13.649 rmmod nvme_tcp 00:50:13.907 rmmod nvme_fabrics 00:50:13.907 rmmod nvme_keyring 00:50:13.907 05:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:50:13.907 05:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:50:13.907 05:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:50:13.907 05:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 717159 ']' 00:50:13.907 05:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 717159 00:50:13.907 05:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 717159 ']' 00:50:13.907 05:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 
-- # kill -0 717159 00:50:13.907 05:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:50:13.907 05:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:50:13.907 05:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 717159 00:50:13.907 05:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:50:13.907 05:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:50:13.907 05:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 717159' 00:50:13.907 killing process with pid 717159 00:50:13.907 05:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 717159 00:50:13.907 05:45:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 717159 00:50:14.165 05:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:50:14.165 05:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:50:14.165 05:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:50:14.165 05:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:50:14.165 05:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:50:14.165 05:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:50:14.165 05:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:50:14.165 05:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:50:14.165 05:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:50:14.165 05:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:14.165 05:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:50:14.165 05:45:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:50:16.700 00:50:16.700 real 0m12.794s 00:50:16.700 user 0m37.601s 00:50:16.700 sys 0m4.208s 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:50:16.700 ************************************ 00:50:16.700 END TEST nvmf_fio_host 00:50:16.700 ************************************ 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:50:16.700 ************************************ 00:50:16.700 START TEST nvmf_failover 00:50:16.700 ************************************ 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:50:16.700 * Looking for test storage... 00:50:16.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:50:16.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:16.700 --rc genhtml_branch_coverage=1 00:50:16.700 --rc genhtml_function_coverage=1 00:50:16.700 --rc genhtml_legend=1 00:50:16.700 --rc geninfo_all_blocks=1 00:50:16.700 --rc geninfo_unexecuted_blocks=1 00:50:16.700 00:50:16.700 ' 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:50:16.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:16.700 --rc genhtml_branch_coverage=1 00:50:16.700 --rc genhtml_function_coverage=1 00:50:16.700 --rc genhtml_legend=1 00:50:16.700 --rc geninfo_all_blocks=1 00:50:16.700 --rc geninfo_unexecuted_blocks=1 00:50:16.700 00:50:16.700 ' 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:50:16.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:16.700 --rc genhtml_branch_coverage=1 00:50:16.700 --rc genhtml_function_coverage=1 00:50:16.700 --rc genhtml_legend=1 00:50:16.700 --rc geninfo_all_blocks=1 00:50:16.700 --rc geninfo_unexecuted_blocks=1 00:50:16.700 00:50:16.700 ' 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:50:16.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:16.700 --rc genhtml_branch_coverage=1 00:50:16.700 --rc genhtml_function_coverage=1 00:50:16.700 --rc genhtml_legend=1 00:50:16.700 --rc geninfo_all_blocks=1 00:50:16.700 --rc geninfo_unexecuted_blocks=1 00:50:16.700 00:50:16.700 ' 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:50:16.700 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:50:16.701 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
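At this point failover.sh has defined its malloc bdev geometry (64 MiB, 512-byte blocks) and the rpc_py handle it will use against the in-namespace target. The fio_host run earlier in this trace provisioned its target through the same handle; stripped of the xtrace prefixes, that sequence was:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc_py nvmf_create_transport -t tcp -o -u 8192    # transport options as captured in the trace
$rpc_py bdev_malloc_create 64 512 -b Malloc1       # 64 MiB RAM-backed namespace, 512 B blocks
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420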
00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:50:16.701 05:45:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:50:18.602 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:50:18.602 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:50:18.602 Found net devices under 0000:0a:00.0: cvl_0_0 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:50:18.602 Found net devices under 0000:0a:00.1: cvl_0_1 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:50:18.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:50:18.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:50:18.602 00:50:18.602 --- 10.0.0.2 ping statistics --- 00:50:18.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:18.602 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:50:18.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:50:18.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:50:18.602 00:50:18.602 --- 10.0.0.1 ping statistics --- 00:50:18.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:18.602 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:50:18.602 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:50:18.603 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:50:18.603 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:50:18.603 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:50:18.603 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:50:18.603 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:50:18.603 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:50:18.603 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:50:18.603 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:50:18.603 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:50:18.603 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=720781 00:50:18.603 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:50:18.603 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 720781 00:50:18.603 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 720781 ']' 00:50:18.603 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:18.603 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:50:18.603 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:18.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:18.603 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:50:18.603 05:45:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:50:18.859 [2024-12-09 05:45:12.846747] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:50:18.859 [2024-12-09 05:45:12.846818] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:50:18.859 [2024-12-09 05:45:12.920142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:50:18.859 [2024-12-09 05:45:12.976771] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
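Editor's note: the entries above are nvmf_tcp_init splitting the two ports across network namespaces so target and initiator traffic crosses the physical link even though both ends run on one host: the target port is moved into its own namespace, both sides get /24 addresses, the default NVMe/TCP port is opened in iptables, and a ping in each direction verifies the path before nvmf_tgt is started inside that namespace. Condensed from the commands in the trace (interface, namespace, and address names are the ones from this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the default NVMe/TCP port on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity checks in both directions, as in the log
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every target-side command later in the log is therefore wrapped in "ip netns exec cvl_0_0_ns_spdk", which is what NVMF_TARGET_NS_CMD expands to.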
00:50:18.859 [2024-12-09 05:45:12.976839] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:50:18.859 [2024-12-09 05:45:12.976862] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:50:18.859 [2024-12-09 05:45:12.976872] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:50:18.859 [2024-12-09 05:45:12.976882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:50:18.859 [2024-12-09 05:45:12.978300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:50:18.859 [2024-12-09 05:45:12.978378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:50:18.859 [2024-12-09 05:45:12.978382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:50:19.116 05:45:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:50:19.116 05:45:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:50:19.116 05:45:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:50:19.116 05:45:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:50:19.116 05:45:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:50:19.116 05:45:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:50:19.116 05:45:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:50:19.374 [2024-12-09 05:45:13.397206] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:50:19.374 05:45:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:50:19.632 Malloc0 00:50:19.632 05:45:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:50:19.890 05:45:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:50:20.147 05:45:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:50:20.404 [2024-12-09 05:45:14.517043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:50:20.404 05:45:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:50:20.661 [2024-12-09 05:45:14.781768] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:50:20.661 05:45:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:50:20.918 [2024-12-09 05:45:15.046663] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:50:20.918 05:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=721067 00:50:20.918 05:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:50:20.918 05:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:50:20.918 05:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 721067 /var/tmp/bdevperf.sock 00:50:20.918 05:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 721067 ']' 00:50:20.918 05:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:50:20.918 05:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:50:20.918 05:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:50:20.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:50:20.918 05:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:50:20.918 05:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:50:21.176 05:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:50:21.176 05:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:50:21.176 05:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:50:21.740 NVMe0n1 00:50:21.740 05:45:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:50:21.997 00:50:21.997 05:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=721204 00:50:21.997 05:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:50:21.997 05:45:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:50:22.994 05:45:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:50:23.252 [2024-12-09 05:45:17.322758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.322823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.322847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 
05:45:17.322859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.322885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.322898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.322911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.322923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.322935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.322947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.322959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.322971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.322984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.322996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same 
with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 [2024-12-09 05:45:17.323416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2460 is same with the state(6) to be set 00:50:23.252 05:45:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:50:26.530 05:45:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:50:26.530 00:50:26.788 05:45:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:50:27.046 [2024-12-09 05:45:21.079814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2f10 is same with the state(6) to be set 00:50:27.046 [2024-12-09 05:45:21.079893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2f10 is same with the state(6) to be set 00:50:27.046 [2024-12-09 05:45:21.079909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2f10 is same with the state(6) to be set 00:50:27.046 [2024-12-09 05:45:21.079922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2f10 is same with the state(6) to be set 00:50:27.046 [2024-12-09 05:45:21.079934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2f10 is same with the state(6) to be set 00:50:27.046 [2024-12-09 05:45:21.079946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2f10 is same with the state(6) to be set 00:50:27.046 [2024-12-09 05:45:21.079959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2f10 is same with the state(6) to be set 00:50:27.046 [2024-12-09 05:45:21.079971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2f10 is same with the state(6) to be set 00:50:27.046 [2024-12-09 05:45:21.079983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2f10 is same with the state(6) to be set 00:50:27.046 [2024-12-09 05:45:21.079996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2f10 is same with the state(6) to be set 00:50:27.046 [2024-12-09 05:45:21.080008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2f10 is same with the state(6) to be set 00:50:27.046 [2024-12-09 05:45:21.080020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2f10 is same with the state(6) to be set 00:50:27.046 [2024-12-09 05:45:21.080033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2f10 is same with the state(6) to be set 00:50:27.046 [2024-12-09 05:45:21.080045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2f10 is same with the state(6) to be set 00:50:27.046 [2024-12-09 05:45:21.080057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2f10 is same with the state(6) to be set 00:50:27.046 [2024-12-09 05:45:21.080068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2f10 is same with the state(6) to be set 00:50:27.046 [2024-12-09 05:45:21.080081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2f10 is same with the state(6) to be set 00:50:27.046 [2024-12-09 05:45:21.080094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2f10 is same with the state(6) to be set 00:50:27.046 [2024-12-09 05:45:21.080106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2f10 is same with the state(6) to be set 00:50:27.046 [2024-12-09 05:45:21.080136] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2f10 is same with the state(6) to be set 00:50:27.046 [2024-12-09 05:45:21.080150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2f10 is same with the state(6) to be set 00:50:27.046 [2024-12-09 05:45:21.080176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2f10 is same with the state(6) to be set 00:50:27.046 [2024-12-09 05:45:21.080189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c2f10 is same with the state(6) to be set 00:50:27.046 05:45:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:50:30.337 05:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:50:30.337 [2024-12-09 05:45:24.411151] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:50:30.338 05:45:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:50:31.271 05:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:50:31.530 [2024-12-09 05:45:25.735958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 
05:45:25.736300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same with the state(6) to be set 00:50:31.530 [2024-12-09 05:45:25.736631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188280 is same 
with the state(6) to be set 00:50:31.788 05:45:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 721204 00:50:37.057 { 00:50:37.057 "results": [ 00:50:37.057 { 00:50:37.057 "job": "NVMe0n1", 00:50:37.057 "core_mask": "0x1", 00:50:37.057 "workload": "verify", 00:50:37.057 "status": "finished", 00:50:37.057 "verify_range": { 00:50:37.057 "start": 0, 00:50:37.057 "length": 16384 00:50:37.057 }, 00:50:37.057 "queue_depth": 128, 00:50:37.057 "io_size": 4096, 00:50:37.057 "runtime": 15.014627, 00:50:37.057 "iops": 8219.984419193364, 00:50:37.057 "mibps": 32.10931413747408, 00:50:37.057 "io_failed": 8812, 00:50:37.057 "io_timeout": 0, 00:50:37.057 "avg_latency_us": 14506.18498056166, 00:50:37.057 "min_latency_us": 807.0637037037037, 00:50:37.057 "max_latency_us": 19612.254814814816 00:50:37.057 } 00:50:37.057 ], 00:50:37.057 "core_count": 1 00:50:37.057 } 00:50:37.057 05:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 721067 00:50:37.057 05:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 721067 ']' 00:50:37.057 05:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 721067 00:50:37.057 05:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:50:37.057 05:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:50:37.057 05:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 721067 00:50:37.316 05:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:50:37.316 05:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:50:37.316 05:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 721067' 00:50:37.316 killing process with pid 721067 00:50:37.316 05:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 721067 00:50:37.316 05:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 721067 00:50:37.585 05:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:50:37.585 [2024-12-09 05:45:15.113128] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:50:37.585 [2024-12-09 05:45:15.113223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid721067 ] 00:50:37.585 [2024-12-09 05:45:15.181234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:37.585 [2024-12-09 05:45:15.239249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:50:37.585 Running I/O for 15 seconds... 
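Editor's note: at this point the run is complete. bdevperf reports roughly 8,200 IOPS for the 15-second verify workload with 8,812 failed I/Os, and the remainder of the output is try.txt, bdevperf's own log, whose ABORTED - SQ DELETION completions below are consistent with I/O being cut off each time the active listener is torn down. The sequence driving the exercise is spread across the host/failover.sh steps above; condensed into one place (rpc and brpc are shorthand introduced here for scripts/rpc.py against the target socket and the bdevperf socket, not harness variables), it amounts to:

    rpc="scripts/rpc.py"                                  # target RPCs on the default /var/tmp/spdk.sock
    brpc="scripts/rpc.py -s /var/tmp/bdevperf.sock"       # RPCs against the bdevperf application
    # two paths to the same subsystem, flagged for failover
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    # while bdevperf runs, shuffle the listeners so I/O has to keep moving
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # fail over to 4421
    sleep 3
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # fail over to 4422
    sleep 3
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # bring 4420 back
    sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # fail back to 4420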
00:50:37.585 8488.00 IOPS, 33.16 MiB/s [2024-12-09T04:45:31.810Z] [2024-12-09 05:45:17.325172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.585 [2024-12-09 05:45:17.325213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.585 [2024-12-09 05:45:17.325246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:77328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.585 [2024-12-09 05:45:17.325286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.585 [2024-12-09 05:45:17.325307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:77336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.585 [2024-12-09 05:45:17.325322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.585 [2024-12-09 05:45:17.325338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:77344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.585 [2024-12-09 05:45:17.325354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.585 [2024-12-09 05:45:17.325370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:77352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.585 [2024-12-09 05:45:17.325385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.585 [2024-12-09 05:45:17.325400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.585 [2024-12-09 05:45:17.325415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.585 [2024-12-09 05:45:17.325430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.585 [2024-12-09 05:45:17.325444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.585 [2024-12-09 05:45:17.325460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.585 [2024-12-09 05:45:17.325474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.585 [2024-12-09 05:45:17.325490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.585 [2024-12-09 05:45:17.325503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.585 [2024-12-09 05:45:17.325519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.585 [2024-12-09 05:45:17.325533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:50:37.585 [2024-12-09 05:45:17.325549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.585 [2024-12-09 05:45:17.325577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.585 [2024-12-09 05:45:17.325604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.585 [2024-12-09 05:45:17.325618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.585 [2024-12-09 05:45:17.325633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:77416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.585 [2024-12-09 05:45:17.325646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.585 [2024-12-09 05:45:17.325660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.585 [2024-12-09 05:45:17.325674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.585 [2024-12-09 05:45:17.325687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:77432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.585 [2024-12-09 05:45:17.325700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.585 [2024-12-09 05:45:17.325715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.585 [2024-12-09 05:45:17.325729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.585 [2024-12-09 05:45:17.325743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.585 [2024-12-09 05:45:17.325756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.585 [2024-12-09 05:45:17.325770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.325784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.325798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.325811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.325825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.325838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.325853] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.325882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.325897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.325910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.325925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.325939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.325953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.325970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.325987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.326001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.326016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.326029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.326045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.326059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.326074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.326088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.326103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.326116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.326131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.326144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.326158] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.326172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.326187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.326200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.326215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.326228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.326243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.326278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.326296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:77184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.586 [2024-12-09 05:45:17.326310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.326325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.586 [2024-12-09 05:45:17.326339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.326359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.326374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.326389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.326403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.326418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.326432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.326448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.326461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.326477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:77624 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.326491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.326507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.326521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.326536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.326550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.326580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.326595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.326610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.326624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.326639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.326652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.326667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.326681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.326696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.326710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.326724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.326738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.326756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.586 [2024-12-09 05:45:17.326770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.586 [2024-12-09 05:45:17.326785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:50:37.586 [2024-12-09 05:45:17.326799 - 05:45:17.328714] nvme_qpair.c: repeated NOTICE pairs (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion): in-flight WRITE commands sqid:1 lba:77712-78152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ commands sqid:1 lba:77200-77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:50:37.588 [2024-12-09 05:45:17.328745 - 05:45:17.329460] nvme_qpair.c: repeated 579:nvme_qpair_abort_queued_reqs *ERROR* "aborting queued i/o" and 558:nvme_qpair_manual_complete_request *NOTICE* "Command completed manually": queued WRITE commands sqid:1 cid:0 lba:78160-78200 len:8 PRP1 0x0 PRP2 0x0 and READ commands sqid:1 cid:0 lba:77256-77312 len:8 PRP1 0x0 PRP2 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:50:37.589 [2024-12-09 05:45:17.329526] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:50:37.589 [2024-12-09 05:45:17.329568 - 05:45:17.329671] nvme_qpair.c: repeated NOTICE pairs (223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion): ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:50:37.589 [2024-12-09 05:45:17.329683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
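For reference on the status printed with every aborted command above: "(00/08)" is the NVMe status code type / status code pair, and "p", "m" and "dnr" are the phase-tag, more and do-not-retry bits from Dword 3 of the completion queue entry. SCT 0x0 with SC 0x08 is the generic status "Command Aborted due to SQ Deletion", which is expected while the I/O submission queue is torn down for the failover. The following standalone C sketch (illustrative names, not SPDK code) decodes that halfword under the NVMe base-specification layout and reproduces the "(00/08) p:0 m:0 dnr:0" string seen in these completions.

/* Standalone sketch (illustrative struct and function names, not the SPDK API):
 * decode the upper halfword of completion-queue-entry Dword 3, i.e. the phase
 * tag plus the 15-bit status field, per the NVMe base specification. */
#include <stdint.h>
#include <stdio.h>

struct nvme_status_fields {
    unsigned p;    /* phase tag (bit 0)            */
    unsigned sc;   /* status code (bits 8:1)       */
    unsigned sct;  /* status code type (bits 11:9) */
    unsigned m;    /* more (bit 14)                */
    unsigned dnr;  /* do not retry (bit 15)        */
};

static struct nvme_status_fields decode_cqe_status(uint16_t hw)
{
    struct nvme_status_fields f;
    f.p   = hw & 0x1;
    f.sc  = (hw >> 1) & 0xff;
    f.sct = (hw >> 9) & 0x7;
    f.m   = (hw >> 14) & 0x1;
    f.dnr = (hw >> 15) & 0x1;
    return f;
}

int main(void)
{
    /* SCT 0x0 / SC 0x08 is the generic "Command Aborted due to SQ Deletion"
     * status, logged above as "ABORTED - SQ DELETION (00/08)". */
    uint16_t hw = (uint16_t)((0x0u << 9) | (0x08u << 1));
    struct nvme_status_fields f = decode_cqe_status(hw);

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", f.sct, f.sc, f.p, f.m, f.dnr);
    return 0;
}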
00:50:37.589 [2024-12-09 05:45:17.332971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:50:37.589 [2024-12-09 05:45:17.333010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c6570 (9): Bad file descriptor
00:50:37.589 [2024-12-09 05:45:17.355393] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:50:37.589 8204.00 IOPS, 32.05 MiB/s [2024-12-09T04:45:31.814Z] 8245.67 IOPS, 32.21 MiB/s [2024-12-09T04:45:31.814Z] 8229.50 IOPS, 32.15 MiB/s [2024-12-09T04:45:31.814Z]
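The three bandwidth samples just above are consistent with a 4 KiB I/O size, which is an inference from the logged figures (len:8 commands with 512-byte logical blocks) rather than something the log states explicitly: 8204.00 IOPS x 4096 B is about 32.05 MiB/s. A minimal C sketch of that arithmetic, with the sample values copied from the log:

/* Minimal arithmetic check: reproduce the MiB/s figures from the IOPS samples
 * above, assuming 4 KiB per I/O (len:8 with 512-byte logical blocks -- an
 * inference, not something the log states explicitly). */
#include <stdio.h>

int main(void)
{
    const double iops[] = { 8204.00, 8245.67, 8229.50 }; /* samples from the log */
    const double io_bytes = 8 * 512;                     /* 4 KiB per command    */

    for (int i = 0; i < 3; i++)
        printf("%.2f IOPS -> %.2f MiB/s\n",
               iops[i], iops[i] * io_bytes / (1024.0 * 1024.0));
    return 0;
}

Its output (32.05, 32.21 and 32.15 MiB/s) matches the logged values.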
00:50:37.589 [2024-12-09 05:45:21.080774 - 05:45:21.084115] nvme_qpair.c: repeated NOTICE pairs (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion): in-flight WRITE commands sqid:1 lba:66984-67536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ commands sqid:1 lba:66664-66976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:50:37.592 [2024-12-09 05:45:21.084128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:21.084142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:67544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.592 [2024-12-09 05:45:21.084156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:21.084172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:67552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.592 [2024-12-09 05:45:21.084186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:21.084200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:67560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.592 [2024-12-09 05:45:21.084213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:21.084228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.592 [2024-12-09 05:45:21.084241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:21.084278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:67576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.592 [2024-12-09 05:45:21.084293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:21.084310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:67584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.592 [2024-12-09 05:45:21.084340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:21.084356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:67592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.592 [2024-12-09 05:45:21.084371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:21.084386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.592 [2024-12-09 05:45:21.084400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:21.084415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:67608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.592 [2024-12-09 05:45:21.084430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:21.084446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:67616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.592 [2024-12-09 05:45:21.084460] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:21.084475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:67624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.592 [2024-12-09 05:45:21.084496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:21.084513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.592 [2024-12-09 05:45:21.084527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:21.084543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:67640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.592 [2024-12-09 05:45:21.084557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:21.084573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:67648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.592 [2024-12-09 05:45:21.084588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:21.084603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:67656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.592 [2024-12-09 05:45:21.084632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:21.084647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:67664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.592 [2024-12-09 05:45:21.084661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:21.084676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:67672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.592 [2024-12-09 05:45:21.084689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:21.084718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:37.592 [2024-12-09 05:45:21.084734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:37.592 [2024-12-09 05:45:21.084746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67680 len:8 PRP1 0x0 PRP2 0x0 00:50:37.592 [2024-12-09 05:45:21.084759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:21.084823] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:50:37.592 [2024-12-09 05:45:21.084874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:50:37.592 [2024-12-09 
05:45:21.084893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:21.084910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:50:37.592 [2024-12-09 05:45:21.084927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:21.084942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:50:37.592 [2024-12-09 05:45:21.084956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:21.084970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:50:37.592 [2024-12-09 05:45:21.084984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:21.085003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:50:37.592 [2024-12-09 05:45:21.085060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c6570 (9): Bad file descriptor 00:50:37.592 [2024-12-09 05:45:21.088373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:50:37.592 8007.60 IOPS, 31.28 MiB/s [2024-12-09T04:45:31.817Z] [2024-12-09 05:45:21.207311] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:50:37.592 8028.33 IOPS, 31.36 MiB/s [2024-12-09T04:45:31.817Z] 8079.00 IOPS, 31.56 MiB/s [2024-12-09T04:45:31.817Z] 8107.00 IOPS, 31.67 MiB/s [2024-12-09T04:45:31.817Z] 8145.11 IOPS, 31.82 MiB/s [2024-12-09T04:45:31.817Z] [2024-12-09 05:45:25.737352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.592 [2024-12-09 05:45:25.737395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:25.737423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.592 [2024-12-09 05:45:25.737440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:25.737456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.592 [2024-12-09 05:45:25.737471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:25.737488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.592 [2024-12-09 05:45:25.737504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:25.737520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.592 [2024-12-09 05:45:25.737535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:25.737564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.592 [2024-12-09 05:45:25.737593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:25.737610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.592 [2024-12-09 05:45:25.737623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:25.737638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.592 [2024-12-09 05:45:25.737651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:25.737665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.592 [2024-12-09 05:45:25.737678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.592 [2024-12-09 05:45:25.737693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 
[2024-12-09 05:45:25.737706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.737721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.737740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.737755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.737769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.737782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.737796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.737810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.737823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.737837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.737851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.737865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.737878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.737892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.593 [2024-12-09 05:45:25.737906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.737920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.737933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.737948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.737961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.737976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.737989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.593 [2024-12-09 05:45:25.738839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.593 [2024-12-09 05:45:25.738853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.738867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.738881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 
05:45:25.738895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.738908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.738922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.738935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.738950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.738963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.738977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.738991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739171] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:15328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:1 lba:15336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15416 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.594 [2024-12-09 05:45:25.739886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.594 [2024-12-09 05:45:25.739900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.739914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.595 [2024-12-09 05:45:25.739928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.739944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.739957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.739976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.739990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 
05:45:25.740106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.595 [2024-12-09 05:45:25.740163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.595 [2024-12-09 05:45:25.740192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.595 [2024-12-09 05:45:25.740220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740427] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:37.595 [2024-12-09 05:45:25.740513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.740975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.740988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.741002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.741015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.741029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.741043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.741057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.741071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.595 [2024-12-09 05:45:25.741086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.595 [2024-12-09 05:45:25.741099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.596 [2024-12-09 05:45:25.741117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.596 [2024-12-09 05:45:25.741132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.596 [2024-12-09 05:45:25.741146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.596 [2024-12-09 05:45:25.741159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.596 [2024-12-09 05:45:25.741174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:37.596 [2024-12-09 05:45:25.741188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.596 [2024-12-09 05:45:25.741217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:50:37.596 [2024-12-09 05:45:25.741232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:50:37.596 [2024-12-09 05:45:25.741244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14776 len:8 PRP1 0x0 PRP2 0x0 00:50:37.596 [2024-12-09 05:45:25.741281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.596 [2024-12-09 05:45:25.741361] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:50:37.596 [2024-12-09 05:45:25.741400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:50:37.596 [2024-12-09 05:45:25.741418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.596 [2024-12-09 05:45:25.741433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:50:37.596 [2024-12-09 05:45:25.741446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.596 [2024-12-09 05:45:25.741461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:50:37.596 [2024-12-09 05:45:25.741474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.596 [2024-12-09 05:45:25.741488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:50:37.596 [2024-12-09 05:45:25.741500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:37.596 [2024-12-09 05:45:25.741514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:50:37.596 [2024-12-09 05:45:25.744870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:50:37.596 [2024-12-09 05:45:25.744912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c6570 (9): Bad file descriptor 00:50:37.596 [2024-12-09 05:45:25.813323] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:50:37.596 8107.90 IOPS, 31.67 MiB/s [2024-12-09T04:45:31.821Z] 8138.91 IOPS, 31.79 MiB/s [2024-12-09T04:45:31.821Z] 8165.75 IOPS, 31.90 MiB/s [2024-12-09T04:45:31.821Z] 8191.54 IOPS, 32.00 MiB/s [2024-12-09T04:45:31.821Z] 8205.86 IOPS, 32.05 MiB/s [2024-12-09T04:45:31.821Z] 8219.47 IOPS, 32.11 MiB/s 00:50:37.596 Latency(us) 00:50:37.596 [2024-12-09T04:45:31.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:37.596 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:50:37.596 Verification LBA range: start 0x0 length 0x4000 00:50:37.596 NVMe0n1 : 15.01 8219.98 32.11 586.89 0.00 14506.18 807.06 19612.25 00:50:37.596 [2024-12-09T04:45:31.821Z] =================================================================================================================== 00:50:37.596 [2024-12-09T04:45:31.821Z] Total : 8219.98 32.11 586.89 0.00 14506.18 807.06 19612.25 00:50:37.596 Received shutdown signal, test time was about 15.000000 seconds 00:50:37.596 00:50:37.596 Latency(us) 00:50:37.596 [2024-12-09T04:45:31.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:37.596 [2024-12-09T04:45:31.821Z] =================================================================================================================== 00:50:37.596 [2024-12-09T04:45:31.821Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:50:37.596 05:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:50:37.596 05:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:50:37.596 05:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:50:37.596 05:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=723052 00:50:37.596 05:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:50:37.596 05:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 723052 /var/tmp/bdevperf.sock 00:50:37.596 05:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@835 -- # '[' -z 723052 ']' 00:50:37.596 05:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:50:37.596 05:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:50:37.596 05:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:50:37.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:50:37.596 05:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:50:37.596 05:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:50:37.855 05:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:50:37.855 05:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:50:37.855 05:45:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:50:38.113 [2024-12-09 05:45:32.087021] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:50:38.113 05:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:50:38.373 [2024-12-09 05:45:32.347736] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:50:38.373 05:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:50:38.631 NVMe0n1 00:50:38.631 05:45:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:50:39.201 00:50:39.201 05:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:50:39.460 00:50:39.460 05:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:50:39.460 05:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:50:40.029 05:45:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:50:40.030 05:45:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:50:43.321 05:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:50:43.321 05:45:37 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:50:43.321 05:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=723723 00:50:43.321 05:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:50:43.321 05:45:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 723723 00:50:44.698 { 00:50:44.698 "results": [ 00:50:44.698 { 00:50:44.698 "job": "NVMe0n1", 00:50:44.698 "core_mask": "0x1", 00:50:44.698 "workload": "verify", 00:50:44.698 "status": "finished", 00:50:44.698 "verify_range": { 00:50:44.698 "start": 0, 00:50:44.698 "length": 16384 00:50:44.698 }, 00:50:44.698 "queue_depth": 128, 00:50:44.698 "io_size": 4096, 00:50:44.698 "runtime": 1.013736, 00:50:44.698 "iops": 8387.785380020045, 00:50:44.698 "mibps": 32.7647866407033, 00:50:44.698 "io_failed": 0, 00:50:44.698 "io_timeout": 0, 00:50:44.698 "avg_latency_us": 15189.560362573558, 00:50:44.698 "min_latency_us": 3276.8, 00:50:44.698 "max_latency_us": 13398.471111111112 00:50:44.698 } 00:50:44.698 ], 00:50:44.698 "core_count": 1 00:50:44.698 } 00:50:44.698 05:45:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:50:44.698 [2024-12-09 05:45:31.601480] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:50:44.698 [2024-12-09 05:45:31.601581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid723052 ] 00:50:44.698 [2024-12-09 05:45:31.670812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:44.698 [2024-12-09 05:45:31.726359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:50:44.698 [2024-12-09 05:45:34.195109] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:50:44.698 [2024-12-09 05:45:34.195205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:50:44.698 [2024-12-09 05:45:34.195227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:44.698 [2024-12-09 05:45:34.195243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:50:44.698 [2024-12-09 05:45:34.195257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:44.698 [2024-12-09 05:45:34.195280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:50:44.698 [2024-12-09 05:45:34.195297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:44.698 [2024-12-09 05:45:34.195312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:50:44.698 [2024-12-09 05:45:34.195326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:50:44.698 [2024-12-09 05:45:34.195346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:50:44.698 [2024-12-09 05:45:34.195397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:50:44.698 [2024-12-09 05:45:34.195430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1807570 (9): Bad file descriptor 00:50:44.698 [2024-12-09 05:45:34.208299] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:50:44.698 Running I/O for 1 seconds... 00:50:44.698 8342.00 IOPS, 32.59 MiB/s 00:50:44.698 Latency(us) 00:50:44.698 [2024-12-09T04:45:38.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:44.698 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:50:44.698 Verification LBA range: start 0x0 length 0x4000 00:50:44.698 NVMe0n1 : 1.01 8387.79 32.76 0.00 0.00 15189.56 3276.80 13398.47 00:50:44.698 [2024-12-09T04:45:38.923Z] =================================================================================================================== 00:50:44.698 [2024-12-09T04:45:38.923Z] Total : 8387.79 32.76 0.00 0.00 15189.56 3276.80 13398.47 00:50:44.698 05:45:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:50:44.698 05:45:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:50:44.698 05:45:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:50:44.956 05:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:50:44.956 05:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:50:45.524 05:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:50:45.524 05:45:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:50:48.812 05:45:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:50:48.812 05:45:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:50:48.812 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 723052 00:50:48.812 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 723052 ']' 00:50:48.812 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 723052 00:50:48.812 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:50:48.812 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:50:48.812 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 723052 00:50:49.070 05:45:43 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:50:49.070 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:50:49.070 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 723052' 00:50:49.070 killing process with pid 723052 00:50:49.070 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 723052 00:50:49.070 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 723052 00:50:49.327 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:50:49.327 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:50:49.585 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:50:49.585 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:50:49.585 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:50:49.585 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:50:49.585 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:50:49.585 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:50:49.585 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:50:49.585 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:50:49.585 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:50:49.585 rmmod nvme_tcp 00:50:49.585 rmmod nvme_fabrics 00:50:49.585 rmmod nvme_keyring 00:50:49.585 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:50:49.585 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:50:49.585 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:50:49.585 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 720781 ']' 00:50:49.585 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 720781 00:50:49.585 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 720781 ']' 00:50:49.585 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 720781 00:50:49.585 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:50:49.585 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:50:49.585 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 720781 00:50:49.585 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:50:49.585 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:50:49.585 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 720781' 00:50:49.585 killing process with pid 720781 00:50:49.585 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 720781 00:50:49.585 05:45:43 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@978 -- # wait 720781 00:50:49.844 05:45:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:50:49.844 05:45:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:50:49.844 05:45:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:50:49.844 05:45:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:50:49.844 05:45:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:50:49.844 05:45:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:50:49.844 05:45:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:50:49.844 05:45:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:50:49.844 05:45:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:50:49.844 05:45:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:49.844 05:45:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:50:49.844 05:45:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:50:52.380 00:50:52.380 real 0m35.700s 00:50:52.380 user 2m6.054s 00:50:52.380 sys 0m5.812s 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:50:52.380 ************************************ 00:50:52.380 END TEST nvmf_failover 00:50:52.380 ************************************ 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:50:52.380 ************************************ 00:50:52.380 START TEST nvmf_host_discovery 00:50:52.380 ************************************ 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:50:52.380 * Looking for test storage... 
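The pass/fail gate for the failover run above simply counts the 'Resetting controller successful' notices in the captured bdevperf output (try.txt); a minimal sketch of that check, reconstructed from the host/failover.sh@65-67 trace fragments earlier in this log rather than taken from the script itself ($testdir below is shorthand for the test/nvmf/host directory shown in the log paths):

    # Hedged sketch, mirroring the commands visible in the trace above.
    testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
    # try.txt holds the bdevperf log captured by the test; failovers are driven
    # across listeners 4420/4421/4422, so exactly three successful controller
    # resets are expected.
    count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi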
00:50:52.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:50:52.380 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:50:52.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:52.381 --rc genhtml_branch_coverage=1 00:50:52.381 --rc genhtml_function_coverage=1 00:50:52.381 --rc genhtml_legend=1 00:50:52.381 --rc geninfo_all_blocks=1 00:50:52.381 --rc geninfo_unexecuted_blocks=1 00:50:52.381 00:50:52.381 ' 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:50:52.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:52.381 --rc genhtml_branch_coverage=1 00:50:52.381 --rc genhtml_function_coverage=1 00:50:52.381 --rc genhtml_legend=1 00:50:52.381 --rc geninfo_all_blocks=1 00:50:52.381 --rc geninfo_unexecuted_blocks=1 00:50:52.381 00:50:52.381 ' 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:50:52.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:52.381 --rc genhtml_branch_coverage=1 00:50:52.381 --rc genhtml_function_coverage=1 00:50:52.381 --rc genhtml_legend=1 00:50:52.381 --rc geninfo_all_blocks=1 00:50:52.381 --rc geninfo_unexecuted_blocks=1 00:50:52.381 00:50:52.381 ' 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:50:52.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:52.381 --rc genhtml_branch_coverage=1 00:50:52.381 --rc genhtml_function_coverage=1 00:50:52.381 --rc genhtml_legend=1 00:50:52.381 --rc geninfo_all_blocks=1 00:50:52.381 --rc geninfo_unexecuted_blocks=1 00:50:52.381 00:50:52.381 ' 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:50:52.381 05:45:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:50:52.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:50:52.381 05:45:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:50:54.285 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:50:54.285 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:50:54.285 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:50:54.286 05:45:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:50:54.286 Found net devices under 0000:0a:00.0: cvl_0_0 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:50:54.286 Found net devices under 0000:0a:00.1: cvl_0_1 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:50:54.286 
05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:50:54.286 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:50:54.544 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:50:54.544 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:50:54.544 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:50:54.544 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:50:54.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:50:54.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:50:54.544 00:50:54.544 --- 10.0.0.2 ping statistics --- 00:50:54.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:54.545 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:50:54.545 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:50:54.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:50:54.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:50:54.545 00:50:54.545 --- 10.0.0.1 ping statistics --- 00:50:54.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:54.545 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:50:54.545 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:50:54.545 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:50:54.545 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:50:54.545 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:50:54.545 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:50:54.545 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:50:54.545 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:50:54.545 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:50:54.545 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:50:54.545 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:50:54.545 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:50:54.545 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:50:54.545 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:54.545 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=726453 00:50:54.545 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 726453 00:50:54.545 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:50:54.545 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 726453 ']' 00:50:54.545 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:54.545 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:50:54.545 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:54.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:54.545 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:50:54.545 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:54.545 [2024-12-09 05:45:48.606920] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:50:54.545 [2024-12-09 05:45:48.606993] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:50:54.545 [2024-12-09 05:45:48.681192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:54.545 [2024-12-09 05:45:48.738747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:50:54.545 [2024-12-09 05:45:48.738822] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:50:54.545 [2024-12-09 05:45:48.738836] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:50:54.545 [2024-12-09 05:45:48.738848] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:50:54.545 [2024-12-09 05:45:48.738857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:50:54.545 [2024-12-09 05:45:48.739504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:54.803 [2024-12-09 05:45:48.890190] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:54.803 [2024-12-09 05:45:48.898465] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:54.803 null0 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:54.803 null1 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=726480 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 726480 /tmp/host.sock 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 726480 ']' 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:50:54.803 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:50:54.803 05:45:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:54.803 [2024-12-09 05:45:48.973399] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:50:54.803 [2024-12-09 05:45:48.973490] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid726480 ] 00:50:55.062 [2024-12-09 05:45:49.042465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:55.062 [2024-12-09 05:45:49.100171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:50:55.062 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd 
-s /tmp/host.sock bdev_nvme_get_controllers 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:55.321 [2024-12-09 05:45:49.496019] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:50:55.321 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:50:55.580 05:45:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:50:56.145 [2024-12-09 05:45:50.237410] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:50:56.145 [2024-12-09 05:45:50.237446] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:50:56.145 [2024-12-09 05:45:50.237478] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:50:56.145 
[2024-12-09 05:45:50.324782] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:50:56.403 [2024-12-09 05:45:50.547082] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:50:56.403 [2024-12-09 05:45:50.548034] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xb1efe0:1 started. 00:50:56.403 [2024-12-09 05:45:50.549973] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:50:56.403 [2024-12-09 05:45:50.550009] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:50:56.403 [2024-12-09 05:45:50.556241] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xb1efe0 was disconnected and freed. delete nvme_qpair. 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:56.661 05:45:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:50:56.661 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:50:56.662 [2024-12-09 05:45:50.849800] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xb1f1c0:1 started. 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:50:56.662 [2024-12-09 05:45:50.856838] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xb1f1c0 was disconnected and freed. delete nvme_qpair. 
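(For readability, the RPC flow the harness has driven up to this point, condensed into plain commands. This is a minimal sketch, not the test script itself: it assumes the rpc_cmd helper maps to the stock SPDK scripts/rpc.py client and uses the /tmp/host.sock host-application socket seen above, and it drops the waitforcondition polling around each check.)
# host side: enable bdev_nvme logging and start discovery against the target's discovery service on 8009
rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
# target side (default RPC socket): create the subsystem, expose null0, listen on 4420, allow the host NQN
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
# host side: the discovery ctrlr should now have attached nvme0 and created nvme0n1 (nvme0n2 once null1 is added)
rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
rpc.py -s /tmp/host.sock notify_get_notifications -i 0 | jq '. | length'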
00:50:56.662 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:56.920 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:50:56.920 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:56.920 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:50:56.920 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:50:56.920 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:50:56.920 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:50:56.920 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:56.920 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:56.920 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:50:56.920 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:50:56.920 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:50:56.920 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:56.921 [2024-12-09 05:45:50.924788] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:50:56.921 [2024-12-09 05:45:50.925039] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:50:56.921 [2024-12-09 05:45:50.925068] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 
nvme0n2" ]]' 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:50:56.921 05:45:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:56.921 05:45:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:50:56.921 05:45:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:56.921 05:45:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:50:56.921 05:45:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:50:56.921 05:45:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:56.921 05:45:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:56.921 05:45:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:50:56.921 05:45:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:50:56.921 05:45:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:50:56.921 05:45:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:56.921 05:45:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:50:56.921 05:45:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:56.921 05:45:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:50:56.921 05:45:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:50:56.921 [2024-12-09 05:45:51.012322] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:50:56.921 05:45:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:50:56.921 05:45:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:50:56.921 05:45:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:50:56.921 [2024-12-09 05:45:51.072034] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:50:56.921 [2024-12-09 05:45:51.072088] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:50:56.921 [2024-12-09 05:45:51.072104] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:50:56.921 [2024-12-09 05:45:51.072112] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:50:57.875 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:57.875 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:50:57.875 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:50:57.875 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:50:57.875 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:50:57.875 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:57.875 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:50:57.875 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:57.875 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:50:57.875 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:57.875 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:50:57.875 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:57.875 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:50:57.875 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:50:57.875 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:50:57.875 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:50:57.875 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:57.875 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:57.875 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:50:57.875 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:50:57.875 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:50:57.875 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:50:57.875 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:57.875 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:57.875 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:58.134 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:50:58.134 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:50:58.134 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:50:58.134 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:58.134 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:50:58.134 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:58.134 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:58.134 [2024-12-09 05:45:52.129319] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:50:58.134 [2024-12-09 05:45:52.129364] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:50:58.134 [2024-12-09 05:45:52.131214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:50:58.134 [2024-12-09 05:45:52.131249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:58.134 [2024-12-09 05:45:52.131299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:50:58.134 [2024-12-09 05:45:52.131327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:58.134 [2024-12-09 05:45:52.131342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:50:58.134 [2024-12-09 05:45:52.131356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:58.134 [2024-12-09 05:45:52.131370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:50:58.134 [2024-12-09 05:45:52.131384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:58.134 [2024-12-09 05:45:52.131397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf10e0 is same with the state(6) to be set 00:50:58.134 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:58.134 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:50:58.134 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:50:58.134 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:58.134 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:58.134 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:50:58.134 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:50:58.134 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:50:58.134 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:50:58.134 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:58.135 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:50:58.135 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:58.135 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:50:58.135 [2024-12-09 05:45:52.141206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf10e0 (9): Bad file descriptor 00:50:58.135 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:58.135 [2024-12-09 05:45:52.151266] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:50:58.135 [2024-12-09 05:45:52.151297] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:50:58.135 [2024-12-09 05:45:52.151309] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:50:58.135 [2024-12-09 05:45:52.151332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:50:58.135 [2024-12-09 05:45:52.151367] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:50:58.135 [2024-12-09 05:45:52.151521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:58.135 [2024-12-09 05:45:52.151551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf10e0 with addr=10.0.0.2, port=4420 00:50:58.135 [2024-12-09 05:45:52.151568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf10e0 is same with the state(6) to be set 00:50:58.135 [2024-12-09 05:45:52.151607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf10e0 (9): Bad file descriptor 00:50:58.135 [2024-12-09 05:45:52.151654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:50:58.135 [2024-12-09 05:45:52.151671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:50:58.135 [2024-12-09 05:45:52.151687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:50:58.135 [2024-12-09 05:45:52.151699] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:50:58.135 [2024-12-09 05:45:52.151725] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:50:58.135 [2024-12-09 05:45:52.151734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:50:58.135 [2024-12-09 05:45:52.161400] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:50:58.135 [2024-12-09 05:45:52.161421] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:50:58.135 [2024-12-09 05:45:52.161431] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:50:58.135 [2024-12-09 05:45:52.161438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:50:58.135 [2024-12-09 05:45:52.161463] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:50:58.135 [2024-12-09 05:45:52.161592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:58.135 [2024-12-09 05:45:52.161620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf10e0 with addr=10.0.0.2, port=4420 00:50:58.135 [2024-12-09 05:45:52.161637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf10e0 is same with the state(6) to be set 00:50:58.135 [2024-12-09 05:45:52.161660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf10e0 (9): Bad file descriptor 00:50:58.135 [2024-12-09 05:45:52.161693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:50:58.135 [2024-12-09 05:45:52.161711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:50:58.135 [2024-12-09 05:45:52.161725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:50:58.135 [2024-12-09 05:45:52.161738] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:50:58.135 [2024-12-09 05:45:52.161747] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:50:58.135 [2024-12-09 05:45:52.161755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:50:58.135 [2024-12-09 05:45:52.171499] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:50:58.135 [2024-12-09 05:45:52.171522] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:50:58.135 [2024-12-09 05:45:52.171531] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:50:58.135 [2024-12-09 05:45:52.171539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:50:58.135 [2024-12-09 05:45:52.171570] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:50:58.135 [2024-12-09 05:45:52.171739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:58.135 [2024-12-09 05:45:52.171767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf10e0 with addr=10.0.0.2, port=4420 00:50:58.135 [2024-12-09 05:45:52.171785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf10e0 is same with the state(6) to be set 00:50:58.135 [2024-12-09 05:45:52.171808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf10e0 (9): Bad file descriptor 00:50:58.135 [2024-12-09 05:45:52.171872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:50:58.135 [2024-12-09 05:45:52.171892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:50:58.135 [2024-12-09 05:45:52.171906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:50:58.135 [2024-12-09 05:45:52.171919] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:50:58.135 [2024-12-09 05:45:52.171928] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:50:58.135 [2024-12-09 05:45:52.171936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:50:58.135 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:58.135 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:58.135 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:50:58.135 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:50:58.135 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:58.135 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:58.135 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:50:58.135 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:50:58.135 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:58.135 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:50:58.135 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:58.135 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:50:58.135 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:58.135 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:50:58.135 [2024-12-09 05:45:52.181604] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:50:58.135 [2024-12-09 05:45:52.181626] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
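(The repeated "connect() failed, errno = 111" / "Bad file descriptor" entries above and below are the host's bdev_nvme layer retrying the 4420 path after the listener was removed at host/discovery.sh@127: each cycle deletes the qpairs, disconnects the ctrlr, and the reconnect poller fails with ECONNREFUSED until the discovery poller drops the stale path. One way to watch the same transition from the host socket, again assuming the stock scripts/rpc.py client, is to poll the remaining trsvcid values with the same pipeline get_subsystem_paths uses:)
while sleep 1; do
  rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs   # "4420 4421" shrinks to "4421"
done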
00:50:58.135 [2024-12-09 05:45:52.181636] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:50:58.135 [2024-12-09 05:45:52.181643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:50:58.135 [2024-12-09 05:45:52.181667] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:50:58.135 [2024-12-09 05:45:52.181896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:58.135 [2024-12-09 05:45:52.181925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf10e0 with addr=10.0.0.2, port=4420 00:50:58.135 [2024-12-09 05:45:52.181948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf10e0 is same with the state(6) to be set 00:50:58.135 [2024-12-09 05:45:52.181973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf10e0 (9): Bad file descriptor 00:50:58.135 [2024-12-09 05:45:52.182008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:50:58.135 [2024-12-09 05:45:52.182025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:50:58.135 [2024-12-09 05:45:52.182039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:50:58.135 [2024-12-09 05:45:52.182052] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:50:58.135 [2024-12-09 05:45:52.182060] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:50:58.135 [2024-12-09 05:45:52.182068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:50:58.135 [2024-12-09 05:45:52.191701] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:50:58.135 [2024-12-09 05:45:52.191723] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:50:58.135 [2024-12-09 05:45:52.191733] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:50:58.135 [2024-12-09 05:45:52.191740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:50:58.135 [2024-12-09 05:45:52.191765] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:50:58.135 [2024-12-09 05:45:52.191886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:58.135 [2024-12-09 05:45:52.191913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf10e0 with addr=10.0.0.2, port=4420 00:50:58.135 [2024-12-09 05:45:52.191944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf10e0 is same with the state(6) to be set 00:50:58.135 [2024-12-09 05:45:52.191966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf10e0 (9): Bad file descriptor 00:50:58.136 [2024-12-09 05:45:52.191986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:50:58.136 [2024-12-09 05:45:52.191999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:50:58.136 [2024-12-09 05:45:52.192011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:50:58.136 [2024-12-09 05:45:52.192023] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:50:58.136 [2024-12-09 05:45:52.192032] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:50:58.136 [2024-12-09 05:45:52.192039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:50:58.136 [2024-12-09 05:45:52.201800] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:50:58.136 [2024-12-09 05:45:52.201820] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:50:58.136 [2024-12-09 05:45:52.201829] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:50:58.136 [2024-12-09 05:45:52.201837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:50:58.136 [2024-12-09 05:45:52.201860] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:50:58.136 [2024-12-09 05:45:52.202053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:58.136 [2024-12-09 05:45:52.202081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf10e0 with addr=10.0.0.2, port=4420 00:50:58.136 [2024-12-09 05:45:52.202102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf10e0 is same with the state(6) to be set 00:50:58.136 [2024-12-09 05:45:52.202126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf10e0 (9): Bad file descriptor 00:50:58.136 [2024-12-09 05:45:52.202147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:50:58.136 [2024-12-09 05:45:52.202162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:50:58.136 [2024-12-09 05:45:52.202176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:50:58.136 [2024-12-09 05:45:52.202188] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:50:58.136 [2024-12-09 05:45:52.202197] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:50:58.136 [2024-12-09 05:45:52.202205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:58.136 [2024-12-09 05:45:52.211894] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:50:58.136 [2024-12-09 05:45:52.211914] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:50:58.136 [2024-12-09 05:45:52.211923] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:50:58.136 [2024-12-09 05:45:52.211930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:50:58.136 [2024-12-09 05:45:52.211953] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:50:58.136 [2024-12-09 05:45:52.212176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:50:58.136 [2024-12-09 05:45:52.212203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf10e0 with addr=10.0.0.2, port=4420 00:50:58.136 [2024-12-09 05:45:52.212219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf10e0 is same with the state(6) to be set 00:50:58.136 [2024-12-09 05:45:52.212242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf10e0 (9): Bad file descriptor 00:50:58.136 [2024-12-09 05:45:52.212290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:50:58.136 [2024-12-09 05:45:52.212307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:50:58.136 [2024-12-09 05:45:52.212321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:50:58.136 [2024-12-09 05:45:52.212334] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:50:58.136 [2024-12-09 05:45:52.212343] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:50:58.136 [2024-12-09 05:45:52.212350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
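(Once the discovery poller reports the 4420 path "not found" just below, the @131/@132 checks reduce to the two conditions sketched here; a standalone version, assuming scripts/rpc.py, the same host socket, and that @132 queries from the current notify_id of 2:)
paths=$(rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
[[ "$paths" == "4421" ]]        # only the 4421 path should remain
new=$(rpc.py -s /tmp/host.sock notify_get_notifications -i 2 | jq '. | length')
(( new == 0 ))                  # no new bdev notifications past notify_id 2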
00:50:58.136 [2024-12-09 05:45:52.215733] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:50:58.136 [2024-12-09 05:45:52.215778] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@921 -- # get_notification_count 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:50:58.136 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:50:58.394 05:45:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:58.394 05:45:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:59.325 [2024-12-09 05:45:53.517460] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:50:59.325 [2024-12-09 05:45:53.517498] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:50:59.325 [2024-12-09 05:45:53.517520] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:50:59.602 [2024-12-09 05:45:53.603810] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:50:59.602 [2024-12-09 05:45:53.668503] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:50:59.602 [2024-12-09 05:45:53.669281] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xb2ada0:1 started. 00:50:59.602 [2024-12-09 05:45:53.671365] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:50:59.602 [2024-12-09 05:45:53.671405] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:50:59.602 [2024-12-09 05:45:53.674533] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xb2ada0 was disconnected and freed. delete nvme_qpair. 
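The waitforcondition / eval traces above are the test's generic polling helper: it evaluates a shell condition up to max=10 times and returns as soon as the condition holds. A minimal re-implementation of that pattern is sketched below; the real helper lives in common/autotest_common.sh and may differ in details such as the sleep interval, and the rpc.py path in the usage line is simply the one used elsewhere in this run:

# Simplified sketch of the polling pattern traced above (sleep interval assumed).
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        eval "$cond" && return 0
        sleep 1
    done
    return 1
}

# Usage in the spirit of host/discovery.sh: wait until the host sees no bdevs.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
waitforcondition '[[ "$($rpc_py -s /tmp/host.sock bdev_get_bdevs | jq -r ".[].name")" == "" ]]'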
00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:59.602 request: 00:50:59.602 { 00:50:59.602 "name": "nvme", 00:50:59.602 "trtype": "tcp", 00:50:59.602 "traddr": "10.0.0.2", 00:50:59.602 "adrfam": "ipv4", 00:50:59.602 "trsvcid": "8009", 00:50:59.602 "hostnqn": "nqn.2021-12.io.spdk:test", 00:50:59.602 "wait_for_attach": true, 00:50:59.602 "method": "bdev_nvme_start_discovery", 00:50:59.602 "req_id": 1 00:50:59.602 } 00:50:59.602 Got JSON-RPC error response 00:50:59.602 response: 00:50:59.602 { 00:50:59.602 "code": -17, 00:50:59.602 "message": "File exists" 00:50:59.602 } 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
sort 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:59.602 request: 00:50:59.602 { 00:50:59.602 "name": "nvme_second", 00:50:59.602 "trtype": "tcp", 00:50:59.602 "traddr": "10.0.0.2", 00:50:59.602 "adrfam": "ipv4", 00:50:59.602 "trsvcid": "8009", 00:50:59.602 "hostnqn": "nqn.2021-12.io.spdk:test", 00:50:59.602 "wait_for_attach": true, 00:50:59.602 "method": "bdev_nvme_start_discovery", 00:50:59.602 "req_id": 1 00:50:59.602 } 00:50:59.602 Got JSON-RPC error response 00:50:59.602 response: 00:50:59.602 { 00:50:59.602 "code": -17, 00:50:59.602 "message": "File exists" 00:50:59.602 } 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 
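Both bdev_nvme_start_discovery attempts above are rejected with JSON-RPC error -17 ("File exists"): the discovery service for 10.0.0.2:8009 is already running, so a second start is refused whether it reuses the name nvme or asks for nvme_second. The sequence can be recapped with the rpc.py path and host socket used in this run (values are environment-specific; this is an illustrative recap, not an excerpt from the scripts):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/tmp/host.sock

# A second start against the same discovery endpoint is expected to fail with -17.
if ! $rpc_py -s $sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 \
        -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w; then
    echo "duplicate discovery start rejected (File exists), matching the trace above"
fi

# Inspect what the host currently knows (expected per the checks above).
$rpc_py -s $sock bdev_nvme_get_discovery_info | jq -r '.[].name'   # -> nvme
$rpc_py -s $sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs  # -> nvme0n1 nvme0n2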
00:50:59.602 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:59.603 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:50:59.603 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:59.860 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:50:59.860 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:50:59.860 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:59.860 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:50:59.860 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:59.860 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:50:59.860 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:50:59.860 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:50:59.860 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:50:59.860 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:50:59.860 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:50:59.860 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:50:59.860 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:50:59.860 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:50:59.860 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:50:59.860 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:50:59.860 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:50:59.860 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:50:59.860 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:50:59.860 05:45:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:00.793 [2024-12-09 05:45:54.890842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:00.793 [2024-12-09 05:45:54.890901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb073c0 with addr=10.0.0.2, port=8010 00:51:00.793 [2024-12-09 05:45:54.890924] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:51:00.793 [2024-12-09 05:45:54.890938] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:51:00.793 [2024-12-09 05:45:54.890951] 
bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:51:01.734 [2024-12-09 05:45:55.893185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:01.734 [2024-12-09 05:45:55.893220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb073c0 with addr=10.0.0.2, port=8010 00:51:01.734 [2024-12-09 05:45:55.893241] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:51:01.734 [2024-12-09 05:45:55.893254] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:51:01.734 [2024-12-09 05:45:55.893266] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:51:03.111 [2024-12-09 05:45:56.895496] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:51:03.111 request: 00:51:03.111 { 00:51:03.111 "name": "nvme_second", 00:51:03.111 "trtype": "tcp", 00:51:03.111 "traddr": "10.0.0.2", 00:51:03.111 "adrfam": "ipv4", 00:51:03.111 "trsvcid": "8010", 00:51:03.111 "hostnqn": "nqn.2021-12.io.spdk:test", 00:51:03.111 "wait_for_attach": false, 00:51:03.111 "attach_timeout_ms": 3000, 00:51:03.111 "method": "bdev_nvme_start_discovery", 00:51:03.111 "req_id": 1 00:51:03.111 } 00:51:03.111 Got JSON-RPC error response 00:51:03.111 response: 00:51:03.111 { 00:51:03.111 "code": -110, 00:51:03.111 "message": "Connection timed out" 00:51:03.111 } 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 726480 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@121 -- # sync 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:51:03.111 rmmod nvme_tcp 00:51:03.111 rmmod nvme_fabrics 00:51:03.111 rmmod nvme_keyring 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 726453 ']' 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 726453 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 726453 ']' 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 726453 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:51:03.111 05:45:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 726453 00:51:03.111 05:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:51:03.111 05:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:51:03.111 05:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 726453' 00:51:03.111 killing process with pid 726453 00:51:03.111 05:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 726453 00:51:03.111 05:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 726453 00:51:03.111 05:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:51:03.111 05:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:51:03.111 05:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:51:03.111 05:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:51:03.111 05:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:51:03.111 05:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:51:03.111 05:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:51:03.111 05:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:51:03.111 05:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:51:03.111 05:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:03.111 05:45:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:51:03.111 05:45:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:51:05.652 00:51:05.652 real 0m13.245s 00:51:05.652 user 0m18.886s 00:51:05.652 sys 0m2.914s 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:05.652 ************************************ 00:51:05.652 END TEST nvmf_host_discovery 00:51:05.652 ************************************ 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:51:05.652 ************************************ 00:51:05.652 START TEST nvmf_host_multipath_status 00:51:05.652 ************************************ 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:51:05.652 * Looking for test storage... 00:51:05.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
scripts/common.sh@345 -- # : 1 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:51:05.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:05.652 --rc genhtml_branch_coverage=1 00:51:05.652 --rc genhtml_function_coverage=1 00:51:05.652 --rc genhtml_legend=1 00:51:05.652 --rc geninfo_all_blocks=1 00:51:05.652 --rc geninfo_unexecuted_blocks=1 00:51:05.652 00:51:05.652 ' 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:51:05.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:05.652 --rc genhtml_branch_coverage=1 00:51:05.652 --rc genhtml_function_coverage=1 00:51:05.652 --rc genhtml_legend=1 00:51:05.652 --rc geninfo_all_blocks=1 00:51:05.652 --rc geninfo_unexecuted_blocks=1 00:51:05.652 00:51:05.652 ' 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:51:05.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:05.652 --rc genhtml_branch_coverage=1 00:51:05.652 --rc genhtml_function_coverage=1 00:51:05.652 --rc genhtml_legend=1 00:51:05.652 --rc geninfo_all_blocks=1 00:51:05.652 --rc geninfo_unexecuted_blocks=1 00:51:05.652 00:51:05.652 ' 00:51:05.652 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:51:05.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:05.652 --rc genhtml_branch_coverage=1 00:51:05.652 --rc genhtml_function_coverage=1 00:51:05.652 --rc genhtml_legend=1 00:51:05.652 --rc 
geninfo_all_blocks=1 00:51:05.652 --rc geninfo_unexecuted_blocks=1 00:51:05.652 00:51:05.652 ' 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:51:05.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:51:05.653 05:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:51:07.566 05:46:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:51:07.566 
05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:51:07.566 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:51:07.566 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:51:07.566 Found net devices under 0000:0a:00.0: cvl_0_0 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:51:07.566 Found net devices under 0000:0a:00.1: cvl_0_1 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:51:07.566 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:51:07.567 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:51:07.567 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:51:07.567 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:51:07.567 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:51:07.567 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:51:07.567 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:51:07.567 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:51:07.567 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:51:07.826 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:51:07.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:51:07.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:51:07.826 00:51:07.826 --- 10.0.0.2 ping statistics --- 00:51:07.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:07.826 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:51:07.826 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:51:07.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:51:07.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:51:07.826 00:51:07.826 --- 10.0.0.1 ping statistics --- 00:51:07.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:07.826 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:51:07.826 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:51:07.826 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:51:07.826 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:51:07.826 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:51:07.826 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:51:07.826 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:51:07.826 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:51:07.826 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:51:07.826 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:51:07.826 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:51:07.826 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:51:07.826 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:51:07.826 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:51:07.826 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=729535 00:51:07.826 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:51:07.826 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 729535 00:51:07.826 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 729535 ']' 00:51:07.826 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:51:07.826 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:51:07.826 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:51:07.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:51:07.826 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:51:07.826 05:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:51:07.826 [2024-12-09 05:46:01.872484] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:51:07.826 [2024-12-09 05:46:01.872570] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:51:07.826 [2024-12-09 05:46:01.941711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:51:07.826 [2024-12-09 05:46:01.995662] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:51:07.826 [2024-12-09 05:46:01.995729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:51:07.826 [2024-12-09 05:46:01.995757] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:51:07.826 [2024-12-09 05:46:01.995768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:51:07.826 [2024-12-09 05:46:01.995782] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
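For reference, the target-side topology that the nvmf/common.sh helpers assemble above reduces to the condensed sketch below. Interface names (cvl_0_0/cvl_0_1), addresses, the iptables rule and the nvmf_tgt invocation are taken from the log; the standalone-script form, the relative binary path and running as root are assumptions.

#!/usr/bin/env bash
# Sketch only: rebuilds the netns-based NVMe/TCP test topology seen in the log above.
set -euxo pipefail

NETNS=cvl_0_0_ns_spdk        # namespace that owns the target-side port
TGT_IF=cvl_0_0               # e810 netdev moved into the namespace (target side)
INI_IF=cvl_0_1               # e810 netdev left in the default namespace (initiator side)
TGT_IP=10.0.0.2
INI_IP=10.0.0.1

ip -4 addr flush "$TGT_IF" && ip -4 addr flush "$INI_IF"
ip netns add "$NETNS"
ip link set "$TGT_IF" netns "$NETNS"
ip addr add "$INI_IP/24" dev "$INI_IF"
ip netns exec "$NETNS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NETNS" ip link set "$TGT_IF" up
ip netns exec "$NETNS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 "$TGT_IP"                          # initiator -> target
ip netns exec "$NETNS" ping -c 1 "$INI_IP"   # target -> initiator
modprobe nvme-tcp

# Launch the SPDK target inside the namespace with the same flags as the job above;
# ./build/bin assumes the current directory is an SPDK checkout.
ip netns exec "$NETNS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &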
00:51:07.826 [2024-12-09 05:46:01.997200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:51:07.826 [2024-12-09 05:46:01.997205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:51:08.084 05:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:51:08.084 05:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:51:08.084 05:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:51:08.084 05:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:51:08.084 05:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:51:08.084 05:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:51:08.084 05:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=729535 00:51:08.084 05:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:51:08.342 [2024-12-09 05:46:02.383235] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:51:08.342 05:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:51:08.600 Malloc0 00:51:08.600 05:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:51:08.858 05:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:51:09.116 05:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:51:09.375 [2024-12-09 05:46:03.477298] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:51:09.375 05:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:51:09.634 [2024-12-09 05:46:03.750017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:51:09.634 05:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=729821 00:51:09.634 05:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:51:09.634 05:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:51:09.634 05:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 729821 
/var/tmp/bdevperf.sock 00:51:09.634 05:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 729821 ']' 00:51:09.634 05:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:51:09.634 05:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:51:09.634 05:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:51:09.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:51:09.634 05:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:51:09.634 05:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:51:09.892 05:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:51:09.892 05:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:51:09.892 05:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:51:10.150 05:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:51:10.716 Nvme0n1 00:51:10.716 05:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:51:11.280 Nvme0n1 00:51:11.280 05:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:51:11.280 05:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:51:13.261 05:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:51:13.261 05:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:51:13.519 05:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:51:13.776 05:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:51:14.710 05:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:51:14.710 05:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:51:14.710 05:46:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:14.710 05:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:51:14.967 05:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:14.967 05:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:51:14.967 05:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:14.967 05:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:51:15.226 05:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:51:15.226 05:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:51:15.226 05:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:15.226 05:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:51:15.484 05:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:15.484 05:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:51:15.484 05:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:15.484 05:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:51:15.742 05:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:15.742 05:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:51:15.742 05:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:15.742 05:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:51:16.307 05:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:16.307 05:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:51:16.307 05:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:16.307 05:46:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:51:16.307 05:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:16.307 05:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:51:16.307 05:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:51:16.565 05:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:51:17.131 05:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:51:18.064 05:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:51:18.064 05:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:51:18.064 05:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:18.064 05:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:51:18.322 05:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:51:18.322 05:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:51:18.322 05:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:18.322 05:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:51:18.580 05:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:18.580 05:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:51:18.580 05:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:18.580 05:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:51:18.838 05:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:18.838 05:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:51:18.838 05:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:18.838 05:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:51:19.096 05:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:19.096 05:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:51:19.096 05:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:19.096 05:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:51:19.354 05:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:19.354 05:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:51:19.354 05:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:19.354 05:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:51:19.613 05:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:19.613 05:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:51:19.613 05:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:51:19.871 05:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:51:20.129 05:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:51:21.062 05:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:51:21.062 05:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:51:21.319 05:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:21.319 05:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:51:21.576 05:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:21.576 05:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:51:21.576 05:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:21.576 05:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:51:21.833 05:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:51:21.833 05:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:51:21.833 05:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:21.833 05:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:51:22.091 05:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:22.091 05:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:51:22.091 05:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:22.091 05:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:51:22.348 05:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:22.348 05:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:51:22.348 05:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:22.348 05:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:51:22.606 05:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:22.606 05:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:51:22.606 05:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:22.606 05:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:51:22.864 05:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:22.864 05:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:51:22.864 05:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:51:23.122 05:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:51:23.378 05:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:51:24.308 05:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:51:24.308 05:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:51:24.308 05:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:24.308 05:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:51:24.871 05:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:24.871 05:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:51:24.871 05:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:24.871 05:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:51:24.871 05:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:51:24.871 05:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:51:24.871 05:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:24.871 05:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:51:25.127 05:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:25.127 05:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:51:25.127 05:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:25.127 05:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:51:25.400 05:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:25.400 05:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:51:25.400 05:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:51:25.400 05:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:51:25.963 05:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:25.963 05:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:51:25.963 05:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:25.963 05:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:51:25.963 05:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:51:25.963 05:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:51:25.963 05:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:51:26.220 05:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:51:26.786 05:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:51:27.719 05:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:51:27.719 05:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:51:27.719 05:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:27.719 05:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:51:27.977 05:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:51:27.977 05:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:51:27.977 05:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:27.977 05:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:51:28.235 05:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:51:28.235 05:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:51:28.235 05:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:28.235 05:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:51:28.493 05:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:28.493 05:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:51:28.493 05:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:28.493 05:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:51:28.752 05:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:28.752 05:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:51:28.752 05:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:28.752 05:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:51:29.010 05:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:51:29.010 05:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:51:29.010 05:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:29.010 05:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:51:29.268 05:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:51:29.268 05:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:51:29.268 05:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:51:29.526 05:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:51:29.784 05:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:51:30.717 05:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:51:30.717 05:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:51:30.717 05:46:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:30.717 05:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:51:30.976 05:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:51:30.976 05:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:51:30.976 05:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:30.976 05:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:51:31.233 05:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:31.233 05:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:51:31.233 05:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:31.233 05:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:51:31.490 05:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:31.490 05:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:51:31.490 05:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:31.490 05:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:51:31.749 05:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:31.749 05:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:51:31.749 05:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:31.749 05:46:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:51:32.315 05:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:51:32.315 05:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:51:32.315 05:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:32.315 
05:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:51:32.315 05:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:32.315 05:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:51:32.573 05:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:51:32.573 05:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:51:32.832 05:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:51:33.090 05:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:51:34.467 05:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:51:34.467 05:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:51:34.467 05:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:34.467 05:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:51:34.467 05:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:34.467 05:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:51:34.467 05:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:34.467 05:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:51:34.725 05:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:34.725 05:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:51:34.725 05:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:34.725 05:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:51:34.983 05:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:34.983 05:46:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:51:34.983 05:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:34.983 05:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:51:35.240 05:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:35.240 05:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:51:35.240 05:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:35.240 05:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:51:35.498 05:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:35.498 05:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:51:35.498 05:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:35.498 05:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:51:35.755 05:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:35.755 05:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:51:35.755 05:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:51:36.320 05:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:51:36.578 05:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:51:37.509 05:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:51:37.509 05:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:51:37.509 05:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:37.509 05:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:51:37.766 05:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:51:37.766 05:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:51:37.766 05:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:37.766 05:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:51:38.023 05:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:38.023 05:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:51:38.023 05:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:38.023 05:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:51:38.280 05:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:38.280 05:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:51:38.280 05:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:38.280 05:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:51:38.537 05:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:38.537 05:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:51:38.537 05:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:38.537 05:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:51:38.795 05:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:38.795 05:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:51:38.795 05:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:38.795 05:46:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:51:39.362 05:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:39.362 05:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:51:39.362 
05:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:51:39.362 05:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:51:39.620 05:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:51:41.001 05:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:51:41.001 05:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:51:41.001 05:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:41.001 05:46:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:51:41.001 05:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:41.001 05:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:51:41.001 05:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:41.001 05:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:51:41.257 05:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:41.257 05:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:51:41.257 05:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:41.257 05:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:51:41.515 05:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:41.515 05:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:51:41.515 05:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:41.515 05:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:51:41.772 05:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:41.772 05:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:51:41.772 05:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:41.772 05:46:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:51:42.029 05:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:42.029 05:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:51:42.029 05:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:42.029 05:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:51:42.287 05:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:42.287 05:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:51:42.287 05:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:51:42.545 05:46:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:51:43.112 05:46:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:51:44.046 05:46:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:51:44.046 05:46:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:51:44.046 05:46:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:44.046 05:46:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:51:44.303 05:46:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:44.303 05:46:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:51:44.304 05:46:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:44.304 05:46:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:51:44.562 05:46:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:51:44.562 05:46:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:51:44.562 05:46:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:44.562 05:46:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:51:44.820 05:46:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:44.820 05:46:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:51:44.820 05:46:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:44.820 05:46:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:51:45.077 05:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:45.077 05:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:51:45.077 05:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:45.077 05:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:51:45.335 05:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:51:45.335 05:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:51:45.335 05:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:51:45.335 05:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:51:45.593 05:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:51:45.593 05:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 729821 00:51:45.593 05:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 729821 ']' 00:51:45.593 05:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 729821 00:51:45.593 05:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:51:45.593 05:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:51:45.593 05:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 729821 00:51:45.593 05:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # 
process_name=reactor_2 00:51:45.593 05:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:51:45.593 05:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 729821' 00:51:45.593 killing process with pid 729821 00:51:45.593 05:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 729821 00:51:45.593 05:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 729821 00:51:45.593 { 00:51:45.593 "results": [ 00:51:45.593 { 00:51:45.593 "job": "Nvme0n1", 00:51:45.593 "core_mask": "0x4", 00:51:45.593 "workload": "verify", 00:51:45.593 "status": "terminated", 00:51:45.593 "verify_range": { 00:51:45.593 "start": 0, 00:51:45.593 "length": 16384 00:51:45.593 }, 00:51:45.593 "queue_depth": 128, 00:51:45.593 "io_size": 4096, 00:51:45.593 "runtime": 34.264639, 00:51:45.593 "iops": 7844.3260411995, 00:51:45.593 "mibps": 30.641898598435546, 00:51:45.593 "io_failed": 0, 00:51:45.593 "io_timeout": 0, 00:51:45.593 "avg_latency_us": 16289.54188358198, 00:51:45.593 "min_latency_us": 227.55555555555554, 00:51:45.593 "max_latency_us": 4026531.84 00:51:45.593 } 00:51:45.593 ], 00:51:45.593 "core_count": 1 00:51:45.593 } 00:51:45.854 05:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 729821 00:51:45.854 05:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:51:45.854 [2024-12-09 05:46:03.811880] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:51:45.854 [2024-12-09 05:46:03.811966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid729821 ] 00:51:45.854 [2024-12-09 05:46:03.879938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:45.854 [2024-12-09 05:46:03.937254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:51:45.854 Running I/O for 90 seconds... 
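A minimal sketch of the multipath status check exercised above, assuming the same bdevperf RPC socket (/var/tmp/bdevperf.sock) and jq being available on the node; the port_status helper here is a simplified stand-in for the one in host/multipath_status.sh, not the script's actual implementation, and the RPC names, subsystem NQN, addresses and jq field names are taken from the trace above:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Query bdevperf for its I/O paths and pull one field (current/connected/accessible)
# for the path whose listener uses the given TCP service id. Assumes a single
# matching path per port, as in this run.
port_status() {    # usage: port_status <trsvcid> <field> <expected>
    local port=$1 field=$2 expected=$3 actual
    actual=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]
}

# Flip the 4421 listener to 'inaccessible' on the target, give the initiator a
# moment to pick up the ANA change, then confirm 4420 stays usable while 4421 does not.
"$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
sleep 1
port_status 4420 accessible true
port_status 4421 accessible false

As a quick sanity check on the terminated job summary above: 7844.33 IOPS at an io_size of 4096 bytes works out to 7844.33 * 4096 / 2^20 ≈ 30.64 MiB/s, which matches the reported mibps for the 34.26 s runtime.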
00:51:45.854 8232.00 IOPS, 32.16 MiB/s [2024-12-09T04:46:40.079Z] 8326.00 IOPS, 32.52 MiB/s [2024-12-09T04:46:40.079Z] 8312.67 IOPS, 32.47 MiB/s [2024-12-09T04:46:40.079Z] 8333.25 IOPS, 32.55 MiB/s [2024-12-09T04:46:40.080Z] 8332.80 IOPS, 32.55 MiB/s [2024-12-09T04:46:40.080Z] 8332.67 IOPS, 32.55 MiB/s [2024-12-09T04:46:40.080Z] 8329.57 IOPS, 32.54 MiB/s [2024-12-09T04:46:40.080Z] 8322.12 IOPS, 32.51 MiB/s [2024-12-09T04:46:40.080Z] 8331.00 IOPS, 32.54 MiB/s [2024-12-09T04:46:40.080Z] 8341.70 IOPS, 32.58 MiB/s [2024-12-09T04:46:40.080Z] 8338.91 IOPS, 32.57 MiB/s [2024-12-09T04:46:40.080Z] 8339.92 IOPS, 32.58 MiB/s [2024-12-09T04:46:40.080Z] 8334.00 IOPS, 32.55 MiB/s [2024-12-09T04:46:40.080Z] 8345.21 IOPS, 32.60 MiB/s [2024-12-09T04:46:40.080Z] [2024-12-09 05:46:20.423506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.423596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.423674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.423695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.423719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.423745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.423766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.423782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.423803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.423818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.423840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.423855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.423876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.423891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.423913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.423928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.423949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.423965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.423999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.424017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.424040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.424072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.424096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.424113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.424134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:83128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.424150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.424171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:83136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.424187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.424208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.424226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.424249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.424289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.424314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.424332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.424358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.424378] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.426319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.426346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.426376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.426395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.426419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.426437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.426461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.426483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.426509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.426525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.426549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.426580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.426605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.426621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.426645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.426661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.426684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.426700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.426724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:51:45.855 [2024-12-09 05:46:20.426740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:51:45.855 [2024-12-09 05:46:20.426763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.855 [2024-12-09 05:46:20.426779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.426802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.426818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.426840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.426857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.426879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.426895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.426919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.426935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.426958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.426973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.427001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.427018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.427041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.427056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.427079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.427095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.427117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 
lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.427133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.427156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.427172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.427194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.427210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.427233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:83352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.427249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.427300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.427318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.427342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.427358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.427381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.427398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.427421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.427438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.427461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.427477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.427505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.427522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.427546] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.427562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.427585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.427601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.427641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.427658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.427682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.427698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.427720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.427737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.427759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.427775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.427798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.427813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.427836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.427851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.427873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.427889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.427912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.427927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 
00:51:45.856 [2024-12-09 05:46:20.427950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.427966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.427989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.428009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.428033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.428049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:51:45.856 [2024-12-09 05:46:20.428072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.856 [2024-12-09 05:46:20.428088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.428110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.428126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.428148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.428164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.428187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.428203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.428226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.428242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.428265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.428307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.428333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.428351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:103 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.428374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.428391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.428414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.428431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.428454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.428471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.428494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.428515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.428539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.428556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.428579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.428595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.428619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.428635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.428658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.428675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.428698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.428729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.428753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.428769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.428792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.428807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.428830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.428846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.428869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.428885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.428912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.428929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.428960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.428978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.429002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.429018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.429045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.429063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.429086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.429102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.429126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.429142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.429166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:51:45.857 [2024-12-09 05:46:20.429182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.429205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.429223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.429245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.429286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.429313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.429330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.429354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.429371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:51:45.857 [2024-12-09 05:46:20.429395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.857 [2024-12-09 05:46:20.429413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.429437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.858 [2024-12-09 05:46:20.429453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.429477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.858 [2024-12-09 05:46:20.429494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.429518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.858 [2024-12-09 05:46:20.429535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.429567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.858 [2024-12-09 05:46:20.429585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.429609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 
lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.858 [2024-12-09 05:46:20.429626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.429655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.858 [2024-12-09 05:46:20.429673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.429697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.858 [2024-12-09 05:46:20.429713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.429737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.858 [2024-12-09 05:46:20.429754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.429777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.858 [2024-12-09 05:46:20.429810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.429836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.858 [2024-12-09 05:46:20.429852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.429875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.858 [2024-12-09 05:46:20.429892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.429915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.858 [2024-12-09 05:46:20.429931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.429954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.858 [2024-12-09 05:46:20.429972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.429995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.858 [2024-12-09 05:46:20.430012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.430035] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.858 [2024-12-09 05:46:20.430052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.430075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.858 [2024-12-09 05:46:20.430095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.430119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.858 [2024-12-09 05:46:20.430136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.430159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.858 [2024-12-09 05:46:20.430175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.430199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.858 [2024-12-09 05:46:20.430215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.430238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.858 [2024-12-09 05:46:20.430279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.430308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.858 [2024-12-09 05:46:20.430325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.430351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.858 [2024-12-09 05:46:20.430369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.430393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:82928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.858 [2024-12-09 05:46:20.430410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.430434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.858 [2024-12-09 05:46:20.430452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 
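Every completion in the block above carries ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. NVMe status code type 3h (path-related) with status code 02h, which is what the target returns for I/O sent over a path whose listener has been put into the inaccessible ANA state, presumably by an earlier set_ANA_state step in this run. bdev_nvme is then expected to retry that I/O on the remaining path, which is consistent with the per-second history below dipping from roughly 8.3k to about 6.6k IOPS before recovering instead of the verify job failing. A small sketch, assuming the try.txt path printed by the cat step above, for pulling these events back out of the captured log:

log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt

# Total I/O completions that came back with the ANA-inaccessible status.
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' "$log"

# The same events bucketed per second, to line them up with the IOPS history.
grep 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' "$log" \
    | sed -n 's/.*\[\(2024-12-09 [0-9:]*\)\..*/\1/p' | sort | uniq -c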
00:51:45.858 8336.80 IOPS, 32.57 MiB/s [2024-12-09T04:46:40.083Z] [2024-12-09 05:46:20.430779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.858 [2024-12-09 05:46:20.430804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.430839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.858 [2024-12-09 05:46:20.430858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.430888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.858 [2024-12-09 05:46:20.430906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.430935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.858 [2024-12-09 05:46:20.430973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:51:45.858 [2024-12-09 05:46:20.431004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.859 [2024-12-09 05:46:20.431020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:51:45.859 [2024-12-09 05:46:20.431050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.859 [2024-12-09 05:46:20.431066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:51:45.859 [2024-12-09 05:46:20.431095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.859 [2024-12-09 05:46:20.431111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:51:45.859 [2024-12-09 05:46:20.431157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:83000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.859 [2024-12-09 05:46:20.431174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:51:45.859 [2024-12-09 05:46:20.431204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:83008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.859 [2024-12-09 05:46:20.431221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:51:45.859 [2024-12-09 05:46:20.431251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:83016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.859 [2024-12-09 05:46:20.431268] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:45.859 [2024-12-09 05:46:20.431309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.859 [2024-12-09 05:46:20.431326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:51:45.859 [2024-12-09 05:46:20.431356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.859 [2024-12-09 05:46:20.431373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:51:45.859 7815.75 IOPS, 30.53 MiB/s [2024-12-09T04:46:40.084Z] 7356.00 IOPS, 28.73 MiB/s [2024-12-09T04:46:40.084Z] 6947.33 IOPS, 27.14 MiB/s [2024-12-09T04:46:40.084Z] 6581.68 IOPS, 25.71 MiB/s [2024-12-09T04:46:40.084Z] 6666.95 IOPS, 26.04 MiB/s [2024-12-09T04:46:40.084Z] 6750.90 IOPS, 26.37 MiB/s [2024-12-09T04:46:40.084Z] 6869.45 IOPS, 26.83 MiB/s [2024-12-09T04:46:40.084Z] 7041.09 IOPS, 27.50 MiB/s [2024-12-09T04:46:40.084Z] 7198.54 IOPS, 28.12 MiB/s [2024-12-09T04:46:40.084Z] 7339.72 IOPS, 28.67 MiB/s [2024-12-09T04:46:40.084Z] 7378.62 IOPS, 28.82 MiB/s [2024-12-09T04:46:40.084Z] 7416.30 IOPS, 28.97 MiB/s [2024-12-09T04:46:40.084Z] 7445.50 IOPS, 29.08 MiB/s [2024-12-09T04:46:40.084Z] 7532.17 IOPS, 29.42 MiB/s [2024-12-09T04:46:40.084Z] 7643.73 IOPS, 29.86 MiB/s [2024-12-09T04:46:40.084Z] 7756.77 IOPS, 30.30 MiB/s [2024-12-09T04:46:40.084Z] [2024-12-09 05:46:37.028578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.859 [2024-12-09 05:46:37.028641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:51:45.859 [2024-12-09 05:46:37.028716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.859 [2024-12-09 05:46:37.028739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:51:45.859 [2024-12-09 05:46:37.028775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.859 [2024-12-09 05:46:37.028794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:51:45.859 [2024-12-09 05:46:37.028817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.859 [2024-12-09 05:46:37.028835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:51:45.859 [2024-12-09 05:46:37.028858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.859 [2024-12-09 05:46:37.028875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:51:45.859 [2024-12-09 05:46:37.028898] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.859 [2024-12-09 05:46:37.028914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:51:45.859 [2024-12-09 05:46:37.028936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.859 [2024-12-09 05:46:37.028953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:51:45.859 [2024-12-09 05:46:37.028975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.859 [2024-12-09 05:46:37.028992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:51:45.859 [2024-12-09 05:46:37.029014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.859 [2024-12-09 05:46:37.029031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:51:45.859 [2024-12-09 05:46:37.029053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.859 [2024-12-09 05:46:37.029069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:51:45.859 [2024-12-09 05:46:37.029091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.859 [2024-12-09 05:46:37.029107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:51:45.859 [2024-12-09 05:46:37.029129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.859 [2024-12-09 05:46:37.029146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:51:45.859 [2024-12-09 05:46:37.029168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.859 [2024-12-09 05:46:37.029185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:51:45.859 [2024-12-09 05:46:37.029208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.859 [2024-12-09 05:46:37.029224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:51:45.859 [2024-12-09 05:46:37.029247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.859 [2024-12-09 05:46:37.029269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:51:45.859 
[2024-12-09 05:46:37.029303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.859 [2024-12-09 05:46:37.029321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:51:45.859 [2024-12-09 05:46:37.029343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.859 [2024-12-09 05:46:37.029360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:51:45.859 [2024-12-09 05:46:37.029382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.859 [2024-12-09 05:46:37.029399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:51:45.859 [2024-12-09 05:46:37.029422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.859 [2024-12-09 05:46:37.029438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:51:45.859 [2024-12-09 05:46:37.029461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.859 [2024-12-09 05:46:37.029477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:51:45.859 [2024-12-09 05:46:37.029500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.860 [2024-12-09 05:46:37.029517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:51:45.860 [2024-12-09 05:46:37.029539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.860 [2024-12-09 05:46:37.029556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:51:45.860 [2024-12-09 05:46:37.029579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.860 [2024-12-09 05:46:37.029596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:51:45.860 [2024-12-09 05:46:37.029618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.860 [2024-12-09 05:46:37.029635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:51:45.860 [2024-12-09 05:46:37.029657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.860 [2024-12-09 05:46:37.029674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 
cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:51:45.860 [2024-12-09 05:46:37.029697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.860 [2024-12-09 05:46:37.029714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:51:45.860 [2024-12-09 05:46:37.029736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.860 [2024-12-09 05:46:37.029772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:51:45.860 [2024-12-09 05:46:37.029796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.860 [2024-12-09 05:46:37.029812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:51:45.860 [2024-12-09 05:46:37.029834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.860 [2024-12-09 05:46:37.029849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:45.860 [2024-12-09 05:46:37.029870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.860 [2024-12-09 05:46:37.029886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:45.860 [2024-12-09 05:46:37.029907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.860 [2024-12-09 05:46:37.029923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:51:45.860 [2024-12-09 05:46:37.029944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.860 [2024-12-09 05:46:37.029960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:51:45.860 [2024-12-09 05:46:37.029982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.860 [2024-12-09 05:46:37.029998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:51:45.860 [2024-12-09 05:46:37.030019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.860 [2024-12-09 05:46:37.030034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:51:45.860 [2024-12-09 05:46:37.030055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.860 [2024-12-09 05:46:37.030071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:51:45.860 [2024-12-09 05:46:37.030109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.860 [2024-12-09 05:46:37.030126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:51:45.860 [2024-12-09 05:46:37.030164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.860 [2024-12-09 05:46:37.030180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:51:45.860 [2024-12-09 05:46:37.030203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.860 [2024-12-09 05:46:37.030219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:51:45.860 [2024-12-09 05:46:37.030242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.860 [2024-12-09 05:46:37.030259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:51:45.860 [2024-12-09 05:46:37.030777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.860 [2024-12-09 05:46:37.030805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:51:45.860 [2024-12-09 05:46:37.030835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.860 [2024-12-09 05:46:37.030854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:51:45.860 [2024-12-09 05:46:37.030878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.860 [2024-12-09 05:46:37.030895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:51:45.860 [2024-12-09 05:46:37.030917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.860 [2024-12-09 05:46:37.030934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.030957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.861 [2024-12-09 05:46:37.030974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.030996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.861 [2024-12-09 05:46:37.031013] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.031035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.861 [2024-12-09 05:46:37.031052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.031075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.861 [2024-12-09 05:46:37.031091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.031114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.861 [2024-12-09 05:46:37.031131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.031153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.861 [2024-12-09 05:46:37.031171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.031194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.861 [2024-12-09 05:46:37.031211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.031233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.861 [2024-12-09 05:46:37.031250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.031288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.861 [2024-12-09 05:46:37.031308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.031330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.861 [2024-12-09 05:46:37.031347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.031370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.861 [2024-12-09 05:46:37.031386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.031409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
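The long run of ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions above is the expected signature of this test: multipath_status.sh flips the ANA state of one listener while the verify job keeps issuing I/O, so every command completed on the downed path is dumped by spdk_nvme_print_completion with that status while I/O fails over to the other path (visible as the IOPS dip and recovery in the samples). The flip itself is not captured in this excerpt; a hedged sketch of how it is typically driven through SPDK's rpc.py (the listener address, port and exact option spellings are assumptions, not taken from this log):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Assumed invocation: mark one listener inaccessible, let I/O fail over, then restore it.
$rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
sleep 10     # commands still routed to this path now complete with ANA status 03/02
$rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized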
00:51:45.861 [2024-12-09 05:46:37.031425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.031448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.861 [2024-12-09 05:46:37.031464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.031486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.861 [2024-12-09 05:46:37.031503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.031525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.861 [2024-12-09 05:46:37.031542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.031564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.861 [2024-12-09 05:46:37.031581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.031604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.861 [2024-12-09 05:46:37.031621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.033508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.861 [2024-12-09 05:46:37.033537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.033565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.861 [2024-12-09 05:46:37.033584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.033608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.861 [2024-12-09 05:46:37.033625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.033648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:45.861 [2024-12-09 05:46:37.033671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.033695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 
lba:5504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.861 [2024-12-09 05:46:37.033712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.033735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.861 [2024-12-09 05:46:37.033751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.033774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.861 [2024-12-09 05:46:37.033790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.033828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.861 [2024-12-09 05:46:37.033845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.033867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.861 [2024-12-09 05:46:37.033883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.033906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.861 [2024-12-09 05:46:37.033922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.033960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.861 [2024-12-09 05:46:37.033975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:51:45.861 [2024-12-09 05:46:37.033997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:45.861 [2024-12-09 05:46:37.034013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:51:45.861 7813.16 IOPS, 30.52 MiB/s [2024-12-09T04:46:40.086Z] 7826.39 IOPS, 30.57 MiB/s [2024-12-09T04:46:40.086Z] 7841.26 IOPS, 30.63 MiB/s [2024-12-09T04:46:40.086Z] Received shutdown signal, test time was about 34.265435 seconds 00:51:45.861 00:51:45.861 Latency(us) 00:51:45.861 [2024-12-09T04:46:40.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:45.861 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:51:45.861 Verification LBA range: start 0x0 length 0x4000 00:51:45.861 Nvme0n1 : 34.26 7844.33 30.64 0.00 0.00 16289.54 227.56 4026531.84 00:51:45.861 [2024-12-09T04:46:40.086Z] =================================================================================================================== 00:51:45.861 
[2024-12-09T04:46:40.086Z] Total : 7844.33 30.64 0.00 0.00 16289.54 227.56 4026531.84 00:51:45.862 05:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:51:46.120 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:51:46.120 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:51:46.120 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:51:46.120 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:51:46.120 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:51:46.120 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:51:46.120 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:51:46.120 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:51:46.120 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:51:46.120 rmmod nvme_tcp 00:51:46.120 rmmod nvme_fabrics 00:51:46.120 rmmod nvme_keyring 00:51:46.120 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:51:46.120 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:51:46.120 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:51:46.120 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 729535 ']' 00:51:46.120 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 729535 00:51:46.120 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 729535 ']' 00:51:46.120 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 729535 00:51:46.120 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:51:46.120 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:51:46.120 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 729535 00:51:46.378 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:51:46.378 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:51:46.378 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 729535' 00:51:46.378 killing process with pid 729535 00:51:46.378 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 729535 00:51:46.378 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 729535 00:51:46.638 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:51:46.638 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 
-- # [[ tcp == \t\c\p ]] 00:51:46.638 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:51:46.638 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:51:46.638 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:51:46.638 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:51:46.638 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:51:46.638 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:51:46.638 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:51:46.638 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:46.638 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:51:46.638 05:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:48.541 05:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:51:48.541 00:51:48.541 real 0m43.281s 00:51:48.541 user 2m11.674s 00:51:48.541 sys 0m10.812s 00:51:48.541 05:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:51:48.541 05:46:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:51:48.541 ************************************ 00:51:48.541 END TEST nvmf_host_multipath_status 00:51:48.541 ************************************ 00:51:48.541 05:46:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:51:48.541 05:46:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:51:48.541 05:46:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:51:48.541 05:46:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:51:48.541 ************************************ 00:51:48.541 START TEST nvmf_discovery_remove_ifc 00:51:48.541 ************************************ 00:51:48.541 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:51:48.799 * Looking for test storage... 
00:51:48.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:51:48.799 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:51:48.799 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:51:48.799 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:51:48.799 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:51:48.799 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:51:48.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:48.800 --rc genhtml_branch_coverage=1 00:51:48.800 --rc genhtml_function_coverage=1 00:51:48.800 --rc genhtml_legend=1 00:51:48.800 --rc geninfo_all_blocks=1 00:51:48.800 --rc geninfo_unexecuted_blocks=1 00:51:48.800 00:51:48.800 ' 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:51:48.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:48.800 --rc genhtml_branch_coverage=1 00:51:48.800 --rc genhtml_function_coverage=1 00:51:48.800 --rc genhtml_legend=1 00:51:48.800 --rc geninfo_all_blocks=1 00:51:48.800 --rc geninfo_unexecuted_blocks=1 00:51:48.800 00:51:48.800 ' 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:51:48.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:48.800 --rc genhtml_branch_coverage=1 00:51:48.800 --rc genhtml_function_coverage=1 00:51:48.800 --rc genhtml_legend=1 00:51:48.800 --rc geninfo_all_blocks=1 00:51:48.800 --rc geninfo_unexecuted_blocks=1 00:51:48.800 00:51:48.800 ' 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:51:48.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:48.800 --rc genhtml_branch_coverage=1 00:51:48.800 --rc genhtml_function_coverage=1 00:51:48.800 --rc genhtml_legend=1 00:51:48.800 --rc geninfo_all_blocks=1 00:51:48.800 --rc geninfo_unexecuted_blocks=1 00:51:48.800 00:51:48.800 ' 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:51:48.800 
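The cmp_versions trace above is only the harness checking whether the installed lcov (1.15) predates version 2 before it picks the coverage option set; it splits both version strings on dots and dashes and compares them field by field. A minimal standalone sketch of the same check, not the scripts/common.sh implementation being traced:

# Succeeds when $1 sorts strictly before $2 as a dotted version string (GNU sort -V).
version_lt() {
    [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
version_lt 1.15 2 && echo "lcov 1.15 < 2: use the legacy lcov option set"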
05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:51:48.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:51:48.800 05:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:51:51.334 05:46:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:51:51.334 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:51:51.334 05:46:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:51:51.334 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:51:51.334 Found net devices under 0000:0a:00.0: cvl_0_0 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:51:51.334 Found net devices under 0000:0a:00.1: cvl_0_1 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
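gather_supported_nvmf_pci_devs, traced above, walks a prebuilt PCI bus cache, keeps the two Intel E810 ports (vendor 0x8086, device 0x159b), and records the kernel netdevs bound to them (cvl_0_0 and cvl_0_1). Roughly the same answer can be pulled by hand with standard pciutils and sysfs; a hedged sketch, not part of the test scripts:

lspci -Dnn -d 8086:159b                      # lists 0000:0a:00.0 and 0000:0a:00.1 on this host
ls /sys/bus/pci/devices/0000:0a:00.0/net/    # -> cvl_0_0, the port the target side will use
ls /sys/bus/pci/devices/0000:0a:00.1/net/    # -> cvl_0_1, the initiator-side port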
net_devs+=("${pci_net_devs[@]}") 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:51:51.334 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:51:51.335 
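nvmf_tcp_init, traced above, then wires the two ports together: the target-side port is moved into its own network namespace (cvl_0_0_ns_spdk), each side gets an address on 10.0.0.0/24, and a firewall exception is opened for the NVMe/TCP port. Condensed from the commands visible in the trace, in order:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP (port 4420) through

The two pings that follow are just reachability checks in each direction before any SPDK process starts.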
05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:51:51.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:51:51.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:51:51.335 00:51:51.335 --- 10.0.0.2 ping statistics --- 00:51:51.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:51.335 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:51:51.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:51:51.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:51:51.335 00:51:51.335 --- 10.0.0.1 ping statistics --- 00:51:51.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:51.335 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=736285 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 736285 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 736285 ']' 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
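With the wiring verified by the pings above, nvmfappstart launches the target application inside the target namespace on core 1 (-m 0x2) with all trace groups enabled (-e 0xFFFF) and then waits for its RPC socket. A condensed view of what that amounts to; the backgrounding, pid capture and polling loop are paraphrased from the harness rather than shown verbatim in this excerpt:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!                 # 736285 in this run
# waitforlisten 736285 then polls the app's default RPC socket (/var/tmp/spdk.sock)
# until it answers, which is the 'Waiting for process to start up...' message below.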
00:51:51.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:51.335 [2024-12-09 05:46:45.313562] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:51:51.335 [2024-12-09 05:46:45.313664] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:51:51.335 [2024-12-09 05:46:45.384947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:51.335 [2024-12-09 05:46:45.437226] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:51:51.335 [2024-12-09 05:46:45.437303] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:51:51.335 [2024-12-09 05:46:45.437317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:51:51.335 [2024-12-09 05:46:45.437328] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:51:51.335 [2024-12-09 05:46:45.437346] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:51:51.335 [2024-12-09 05:46:45.437955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:51:51.335 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:51.593 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:51:51.593 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:51:51.593 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:51.593 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:51.593 [2024-12-09 05:46:45.591411] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:51:51.593 [2024-12-09 05:46:45.599600] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:51:51.593 null0 00:51:51.593 [2024-12-09 05:46:45.631502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:51:51.593 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:51.593 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:51:51.593 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@59 -- # hostpid=736316 00:51:51.593 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 736316 /tmp/host.sock 00:51:51.593 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 736316 ']' 00:51:51.593 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:51:51.593 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:51:51.593 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:51:51.593 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:51:51.593 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:51:51.593 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:51.593 [2024-12-09 05:46:45.700842] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:51:51.593 [2024-12-09 05:46:45.700938] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid736316 ] 00:51:51.593 [2024-12-09 05:46:45.769141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:51.851 [2024-12-09 05:46:45.828356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:51:51.851 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:51:51.851 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:51:51.851 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:51:51.851 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:51:51.851 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:51.851 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:51.851 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:51.851 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:51:51.851 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:51.851 05:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:51.851 05:46:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:51.851 05:46:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:51:51.851 05:46:46 
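The host side of the test is a second nvmf_tgt acting purely as an initiator: it is started with --wait-for-rpc on its own socket (/tmp/host.sock), gets its bdev_nvme options set before the framework is initialized, and is then pointed at the discovery service on 10.0.0.2:8009. Issued directly with rpc.py, the three RPCs traced above look like this (rpc_cmd in the script is a thin wrapper around rpc.py; arguments copied from the trace):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py -s /tmp/host.sock"
  $RPC bdev_nvme_set_options -e 1              # same option string the script passes pre-init
  $RPC framework_start_init                    # finish the startup deferred by --wait-for-rpc
  $RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
      --wait-for-attach                        # returns once the subsystem at 10.0.0.2:4420 is attached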
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:51.851 05:46:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:53.292 [2024-12-09 05:46:47.098893] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:51:53.292 [2024-12-09 05:46:47.098928] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:51:53.292 [2024-12-09 05:46:47.098955] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:51:53.292 [2024-12-09 05:46:47.226395] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:51:53.292 [2024-12-09 05:46:47.288050] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:51:53.292 [2024-12-09 05:46:47.289057] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x73efd0:1 started. 00:51:53.292 [2024-12-09 05:46:47.290810] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:51:53.292 [2024-12-09 05:46:47.290858] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:51:53.292 [2024-12-09 05:46:47.290895] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:51:53.292 [2024-12-09 05:46:47.290919] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:51:53.292 [2024-12-09 05:46:47.290950] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:51:53.292 05:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.292 05:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:51:53.292 05:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:51:53.292 05:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:53.292 05:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:51:53.292 05:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.292 05:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:51:53.292 05:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:53.292 05:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:51:53.292 05:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.292 05:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:51:53.292 05:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:51:53.292 [2024-12-09 05:46:47.338958] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x73efd0 was disconnected and freed. delete nvme_qpair. 
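wait_for_bdev above polls the host app once a second until bdev_get_bdevs reports exactly the expected name list: nvme0n1 right after attach, an empty string once the interface is pulled. A simplified stand-alone version of the two helpers visible in the trace (the real helper at discovery_remove_ifc.sh@33 may also bound the number of retries, which is omitted here):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  get_bdev_list() {
      "$SPDK/scripts/rpc.py" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {   # usage: wait_for_bdev nvme0n1   or   wait_for_bdev ''
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }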
00:51:53.292 05:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:51:53.292 05:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:51:53.292 05:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:51:53.292 05:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:53.292 05:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:51:53.292 05:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.292 05:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:53.292 05:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:51:53.292 05:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:51:53.292 05:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.292 05:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:51:53.292 05:46:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:51:54.329 05:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:51:54.330 05:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:54.330 05:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:51:54.330 05:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:54.330 05:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:51:54.330 05:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:54.330 05:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:51:54.330 05:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:54.330 05:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:51:54.330 05:46:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:51:55.263 05:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:51:55.263 05:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:55.263 05:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:55.263 05:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:51:55.263 05:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:55.263 05:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:51:55.263 05:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:51:55.521 05:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:55.521 05:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:51:55.521 05:46:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:51:56.459 05:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:51:56.459 05:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:56.459 05:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:51:56.459 05:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:56.459 05:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:51:56.459 05:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:56.459 05:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:51:56.459 05:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:56.459 05:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:51:56.459 05:46:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:51:57.391 05:46:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:51:57.391 05:46:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:57.391 05:46:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:51:57.391 05:46:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:57.391 05:46:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:57.391 05:46:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:51:57.391 05:46:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:51:57.391 05:46:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:57.391 05:46:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:51:57.391 05:46:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:51:58.782 05:46:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:51:58.782 05:46:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:58.782 05:46:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:51:58.782 05:46:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:58.782 05:46:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:51:58.782 05:46:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:58.782 05:46:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:51:58.782 05:46:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:58.782 05:46:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:51:58.783 05:46:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:51:58.783 [2024-12-09 05:46:52.732249] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:51:58.783 [2024-12-09 05:46:52.732338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:51:58.783 [2024-12-09 05:46:52.732361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:58.783 [2024-12-09 05:46:52.732378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:51:58.783 [2024-12-09 05:46:52.732392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:58.783 [2024-12-09 05:46:52.732405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:51:58.783 [2024-12-09 05:46:52.732417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:58.783 [2024-12-09 05:46:52.732431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:51:58.783 [2024-12-09 05:46:52.732443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:58.783 [2024-12-09 05:46:52.732456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:51:58.783 [2024-12-09 05:46:52.732469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:58.783 [2024-12-09 05:46:52.732481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71b860 is same with the state(6) to be set 00:51:58.783 [2024-12-09 05:46:52.742275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71b860 (9): Bad file descriptor 00:51:58.783 [2024-12-09 05:46:52.752314] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:51:58.783 [2024-12-09 05:46:52.752337] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:51:58.783 [2024-12-09 05:46:52.752347] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:51:58.783 [2024-12-09 05:46:52.752355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:51:58.783 [2024-12-09 05:46:52.752399] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
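With 10.0.0.2 deleted and cvl_0_0 downed (discovery_remove_ifc.sh@75-76 above), the host's admin queue pair times out with errno 110, queued admin commands are aborted with SQ DELETION, and bdev_nvme enters the reconnect cycle governed by the discovery flags: retry every --reconnect-delay-sec 1 and delete the controller once --ctrlr-loss-timeout-sec 2 has elapsed without a successful reconnect. One way to watch that cycle from outside while the link is down, assuming the same host RPC socket as above (bdev_nvme_get_controllers just dumps the controller state as JSON; no particular field names are relied on here):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  for _ in 1 2 3 4 5; do
      "$SPDK/scripts/rpc.py" -s /tmp/host.sock bdev_nvme_get_controllers
      sleep 1
  done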
00:51:59.716 05:46:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:51:59.716 05:46:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:59.716 05:46:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:51:59.716 05:46:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:59.716 05:46:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:51:59.716 05:46:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:51:59.716 05:46:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:51:59.716 [2024-12-09 05:46:53.817321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:51:59.716 [2024-12-09 05:46:53.817400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x71b860 with addr=10.0.0.2, port=4420 00:51:59.716 [2024-12-09 05:46:53.817427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71b860 is same with the state(6) to be set 00:51:59.716 [2024-12-09 05:46:53.817480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71b860 (9): Bad file descriptor 00:51:59.716 [2024-12-09 05:46:53.817955] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:51:59.716 [2024-12-09 05:46:53.817999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:51:59.716 [2024-12-09 05:46:53.818015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:51:59.716 [2024-12-09 05:46:53.818031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:51:59.716 [2024-12-09 05:46:53.818045] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:51:59.716 [2024-12-09 05:46:53.818056] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:51:59.716 [2024-12-09 05:46:53.818064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:51:59.716 [2024-12-09 05:46:53.818078] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:51:59.716 [2024-12-09 05:46:53.818087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:51:59.716 05:46:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:59.716 05:46:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:51:59.716 05:46:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:52:00.648 [2024-12-09 05:46:54.820589] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:52:00.648 [2024-12-09 05:46:54.820620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:52:00.648 [2024-12-09 05:46:54.820641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:52:00.648 [2024-12-09 05:46:54.820654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:52:00.648 [2024-12-09 05:46:54.820666] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:52:00.649 [2024-12-09 05:46:54.820679] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:52:00.649 [2024-12-09 05:46:54.820689] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:52:00.649 [2024-12-09 05:46:54.820696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:52:00.649 [2024-12-09 05:46:54.820736] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:52:00.649 [2024-12-09 05:46:54.820789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:52:00.649 [2024-12-09 05:46:54.820819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:00.649 [2024-12-09 05:46:54.820838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:52:00.649 [2024-12-09 05:46:54.820853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:00.649 [2024-12-09 05:46:54.820866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:52:00.649 [2024-12-09 05:46:54.820880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:00.649 [2024-12-09 05:46:54.820894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:52:00.649 [2024-12-09 05:46:54.820907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:00.649 [2024-12-09 05:46:54.820922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:52:00.649 [2024-12-09 05:46:54.820935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:00.649 [2024-12-09 05:46:54.820947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
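Once the 2-second controller-loss timeout expires, failover and reinitialization are abandoned, the data controller is torn down, and remove_discovery_entry drops nqn.2016-06.io.spdk:cnode0 from the discovery service; the discovery controller itself also lands in a failed state until the path returns. At this point the host reports no block devices, which is what the wait_for_bdev '' loop in the trace is waiting for. A hedged way to confirm the same thing by hand (bdev_get_bdevs appears in the trace; bdev_nvme_get_discovery_info is a standard SPDK RPC in recent releases, adjust if your build predates it):

  "$SPDK/scripts/rpc.py" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'   # prints nothing now
  "$SPDK/scripts/rpc.py" -s /tmp/host.sock bdev_nvme_get_discovery_info        # discovery context still present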
00:52:00.649 [2024-12-09 05:46:54.820995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x70ab50 (9): Bad file descriptor 00:52:00.649 [2024-12-09 05:46:54.821986] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:52:00.649 [2024-12-09 05:46:54.822008] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:52:00.649 05:46:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:52:00.649 05:46:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:52:00.649 05:46:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:52:00.649 05:46:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:52:00.649 05:46:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:00.649 05:46:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:52:00.649 05:46:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:52:00.649 05:46:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:00.906 05:46:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:52:00.906 05:46:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:52:00.906 05:46:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:52:00.906 05:46:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:52:00.906 05:46:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:52:00.906 05:46:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:52:00.906 05:46:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:00.906 05:46:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:52:00.906 05:46:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:52:00.906 05:46:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:52:00.906 05:46:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:52:00.906 05:46:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:00.906 05:46:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:52:00.906 05:46:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:52:01.835 05:46:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:52:01.835 05:46:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:52:01.835 05:46:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:52:01.835 05:46:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:01.835 05:46:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:52:01.835 05:46:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:52:01.835 05:46:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:52:01.835 05:46:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:01.835 05:46:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:52:01.835 05:46:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:52:02.765 [2024-12-09 05:46:56.834744] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:52:02.765 [2024-12-09 05:46:56.834783] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:52:02.765 [2024-12-09 05:46:56.834806] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:52:02.765 [2024-12-09 05:46:56.921076] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:52:02.765 [2024-12-09 05:46:56.975782] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:52:02.765 [2024-12-09 05:46:56.976636] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x7262c0:1 started. 00:52:02.765 [2024-12-09 05:46:56.978087] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:52:02.765 [2024-12-09 05:46:56.978131] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:52:02.765 [2024-12-09 05:46:56.978162] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:52:02.765 [2024-12-09 05:46:56.978186] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:52:02.765 [2024-12-09 05:46:56.978200] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:52:02.765 [2024-12-09 05:46:56.983169] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x7262c0 was disconnected and freed. delete nvme_qpair. 
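Re-adding the address and bringing cvl_0_0 back up (discovery_remove_ifc.sh@82-83 above) is enough for the still-running discovery service to re-enumerate the subsystem; the controller reattaches under a new name, so the namespace bdev reappears as nvme1n1 rather than nvme0n1. The restore-and-wait step, reusing the helper sketched earlier:

  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  wait_for_bdev nvme1n1    # blocks until discovery attaches the new controller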
00:52:03.023 05:46:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:52:03.023 05:46:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:52:03.023 05:46:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:03.023 05:46:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:52:03.023 05:46:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:52:03.023 05:46:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:52:03.023 05:46:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:52:03.023 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:03.023 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:52:03.023 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:52:03.023 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 736316 00:52:03.023 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 736316 ']' 00:52:03.023 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 736316 00:52:03.023 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:52:03.023 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:03.023 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 736316 00:52:03.023 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:52:03.023 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:52:03.023 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 736316' 00:52:03.023 killing process with pid 736316 00:52:03.023 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 736316 00:52:03.023 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 736316 00:52:03.280 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:52:03.280 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:52:03.280 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:52:03.280 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:52:03.280 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:52:03.280 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:52:03.280 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:52:03.280 rmmod nvme_tcp 00:52:03.280 rmmod nvme_fabrics 00:52:03.280 rmmod nvme_keyring 00:52:03.280 05:46:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:52:03.280 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:52:03.280 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:52:03.280 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 736285 ']' 00:52:03.280 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 736285 00:52:03.280 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 736285 ']' 00:52:03.280 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 736285 00:52:03.280 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:52:03.280 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:03.280 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 736285 00:52:03.280 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:52:03.280 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:52:03.280 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 736285' 00:52:03.280 killing process with pid 736285 00:52:03.280 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 736285 00:52:03.280 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 736285 00:52:03.539 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:52:03.539 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:52:03.539 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:52:03.539 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:52:03.539 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:52:03.539 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:52:03.539 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:52:03.540 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:52:03.540 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:52:03.540 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:52:03.540 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:52:03.540 05:46:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:52:06.082 00:52:06.082 real 0m17.012s 00:52:06.082 user 0m23.899s 00:52:06.082 sys 0m3.030s 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
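nvmftestfini unwinds the setup in reverse: kill the host app, unload the nvme-tcp module stack, kill the target, strip the SPDK_NVMF-tagged iptables rules by round-tripping through iptables-save/iptables-restore, drop the namespace, and flush the initiator address. A condensed equivalent of that cleanup (the two PIDs are whatever was captured when each nvmf_tgt was launched, as nvmfpid was in the earlier sketch; remove_spdk_ns is not expanded in the trace, deleting the namespace is its assumed effect):

  kill -9 "$hostpid" "$nvmfpid" 2>/dev/null || true
  modprobe -r nvme-tcp nvme-fabrics 2>/dev/null || true
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # removes only the tagged ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1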
common/autotest_common.sh@1130 -- # xtrace_disable 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:52:06.082 ************************************ 00:52:06.082 END TEST nvmf_discovery_remove_ifc 00:52:06.082 ************************************ 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:52:06.082 ************************************ 00:52:06.082 START TEST nvmf_identify_kernel_target 00:52:06.082 ************************************ 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:52:06.082 * Looking for test storage... 00:52:06.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:52:06.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:06.082 --rc genhtml_branch_coverage=1 00:52:06.082 --rc genhtml_function_coverage=1 00:52:06.082 --rc genhtml_legend=1 00:52:06.082 --rc geninfo_all_blocks=1 00:52:06.082 --rc geninfo_unexecuted_blocks=1 00:52:06.082 00:52:06.082 ' 00:52:06.082 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:52:06.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:06.082 --rc genhtml_branch_coverage=1 00:52:06.083 --rc genhtml_function_coverage=1 00:52:06.083 --rc genhtml_legend=1 00:52:06.083 --rc geninfo_all_blocks=1 00:52:06.083 --rc geninfo_unexecuted_blocks=1 00:52:06.083 00:52:06.083 ' 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:52:06.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:06.083 --rc genhtml_branch_coverage=1 00:52:06.083 --rc genhtml_function_coverage=1 00:52:06.083 --rc genhtml_legend=1 00:52:06.083 --rc geninfo_all_blocks=1 00:52:06.083 --rc geninfo_unexecuted_blocks=1 00:52:06.083 00:52:06.083 ' 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:52:06.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:06.083 --rc genhtml_branch_coverage=1 00:52:06.083 --rc genhtml_function_coverage=1 00:52:06.083 --rc genhtml_legend=1 00:52:06.083 --rc geninfo_all_blocks=1 00:52:06.083 --rc geninfo_unexecuted_blocks=1 00:52:06.083 00:52:06.083 ' 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:52:06.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:52:06.083 05:46:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:52:07.985 05:47:02 
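The "[: : integer expression expected" message above is benign noise rather than a test failure: nvmf/common.sh line 33 feeds an empty expansion straight into a numeric test, so '[' sees '' where it expects an integer. The usual shell fix is to default the expansion before comparing; VAR below is only a stand-in for whichever flag that line actually checks:

  [ "$VAR" -eq 1 ] && echo enabled        # with VAR unset: "[: : integer expression expected"
  [ "${VAR:-0}" -eq 1 ] && echo enabled   # defaulting the expansion keeps the test quiet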
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:52:07.985 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:52:07.985 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:52:07.985 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:52:07.986 Found net devices under 0000:0a:00.0: cvl_0_0 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:52:07.986 Found net devices under 0000:0a:00.1: cvl_0_1 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:52:07.986 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:52:08.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:52:08.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:52:08.245 00:52:08.245 --- 10.0.0.2 ping statistics --- 00:52:08.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:08.245 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:52:08.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:52:08.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:52:08.245 00:52:08.245 --- 10.0.0.1 ping statistics --- 00:52:08.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:08.245 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:08.245 05:47:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:52:08.245 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:52:08.246 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:52:08.246 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:52:08.246 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:52:08.246 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:52:08.246 05:47:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:52:09.181 Waiting for block devices as requested 00:52:09.440 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:52:09.440 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:52:09.699 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:52:09.699 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:52:09.699 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:52:09.957 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:52:09.957 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:52:09.958 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:52:09.958 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:52:10.216 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:52:10.216 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:52:10.216 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:52:10.216 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:52:10.475 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:52:10.475 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:52:10.475 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:52:10.475 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
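For readers following the trace, the nvmf_tcp_init stage above reduces to a small netns-based topology: the target-side E810 port is moved into its own network namespace while the initiator port stays in the default one. The sketch below condenses the commands already shown in this log; cvl_0_0/cvl_0_1 are the port names detected in this particular run, 10.0.0.1/10.0.0.2 are the test defaults used here, and the iptables comment is abbreviated (the script embeds the full rule text so the cleanup path can later strip it with grep -v SPDK_NVMF).

  # Move the target-side port into its own namespace; the initiator port stays in the default ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open TCP/4420 for NVMe-oF and confirm reachability in both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The ping output later in the trace confirms both directions before the kernel target is configured.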
00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:52:10.733 No valid GPT data, bailing 00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:52:10.733 00:52:10.733 Discovery Log Number of Records 2, Generation counter 2 00:52:10.733 =====Discovery Log Entry 0====== 00:52:10.733 trtype: tcp 00:52:10.733 adrfam: ipv4 00:52:10.733 subtype: current discovery subsystem 00:52:10.733 treq: not specified, sq flow control disable supported 00:52:10.733 portid: 1 00:52:10.733 trsvcid: 4420 00:52:10.733 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:52:10.733 traddr: 10.0.0.1 00:52:10.733 eflags: none 00:52:10.733 sectype: none 00:52:10.733 =====Discovery Log Entry 1====== 00:52:10.733 trtype: tcp 00:52:10.733 adrfam: ipv4 00:52:10.733 subtype: nvme subsystem 00:52:10.733 treq: not specified, sq flow control disable 
supported 00:52:10.733 portid: 1 00:52:10.733 trsvcid: 4420 00:52:10.733 subnqn: nqn.2016-06.io.spdk:testnqn 00:52:10.733 traddr: 10.0.0.1 00:52:10.733 eflags: none 00:52:10.733 sectype: none 00:52:10.733 05:47:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:52:10.733 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:52:11.001 ===================================================== 00:52:11.001 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:52:11.001 ===================================================== 00:52:11.001 Controller Capabilities/Features 00:52:11.001 ================================ 00:52:11.001 Vendor ID: 0000 00:52:11.001 Subsystem Vendor ID: 0000 00:52:11.001 Serial Number: 9f0df1b0c06959f98066 00:52:11.001 Model Number: Linux 00:52:11.001 Firmware Version: 6.8.9-20 00:52:11.001 Recommended Arb Burst: 0 00:52:11.001 IEEE OUI Identifier: 00 00 00 00:52:11.001 Multi-path I/O 00:52:11.001 May have multiple subsystem ports: No 00:52:11.001 May have multiple controllers: No 00:52:11.001 Associated with SR-IOV VF: No 00:52:11.001 Max Data Transfer Size: Unlimited 00:52:11.001 Max Number of Namespaces: 0 00:52:11.001 Max Number of I/O Queues: 1024 00:52:11.001 NVMe Specification Version (VS): 1.3 00:52:11.001 NVMe Specification Version (Identify): 1.3 00:52:11.001 Maximum Queue Entries: 1024 00:52:11.001 Contiguous Queues Required: No 00:52:11.001 Arbitration Mechanisms Supported 00:52:11.001 Weighted Round Robin: Not Supported 00:52:11.001 Vendor Specific: Not Supported 00:52:11.001 Reset Timeout: 7500 ms 00:52:11.001 Doorbell Stride: 4 bytes 00:52:11.001 NVM Subsystem Reset: Not Supported 00:52:11.001 Command Sets Supported 00:52:11.001 NVM Command Set: Supported 00:52:11.001 Boot Partition: Not Supported 00:52:11.001 Memory Page Size Minimum: 4096 bytes 00:52:11.001 Memory Page Size Maximum: 4096 bytes 00:52:11.001 Persistent Memory Region: Not Supported 00:52:11.001 Optional Asynchronous Events Supported 00:52:11.001 Namespace Attribute Notices: Not Supported 00:52:11.001 Firmware Activation Notices: Not Supported 00:52:11.001 ANA Change Notices: Not Supported 00:52:11.001 PLE Aggregate Log Change Notices: Not Supported 00:52:11.001 LBA Status Info Alert Notices: Not Supported 00:52:11.001 EGE Aggregate Log Change Notices: Not Supported 00:52:11.001 Normal NVM Subsystem Shutdown event: Not Supported 00:52:11.001 Zone Descriptor Change Notices: Not Supported 00:52:11.001 Discovery Log Change Notices: Supported 00:52:11.001 Controller Attributes 00:52:11.001 128-bit Host Identifier: Not Supported 00:52:11.001 Non-Operational Permissive Mode: Not Supported 00:52:11.001 NVM Sets: Not Supported 00:52:11.001 Read Recovery Levels: Not Supported 00:52:11.001 Endurance Groups: Not Supported 00:52:11.001 Predictable Latency Mode: Not Supported 00:52:11.001 Traffic Based Keep ALive: Not Supported 00:52:11.001 Namespace Granularity: Not Supported 00:52:11.001 SQ Associations: Not Supported 00:52:11.001 UUID List: Not Supported 00:52:11.001 Multi-Domain Subsystem: Not Supported 00:52:11.001 Fixed Capacity Management: Not Supported 00:52:11.001 Variable Capacity Management: Not Supported 00:52:11.001 Delete Endurance Group: Not Supported 00:52:11.001 Delete NVM Set: Not Supported 00:52:11.001 Extended LBA Formats Supported: Not Supported 00:52:11.001 Flexible Data Placement 
Supported: Not Supported 00:52:11.001 00:52:11.001 Controller Memory Buffer Support 00:52:11.001 ================================ 00:52:11.001 Supported: No 00:52:11.001 00:52:11.001 Persistent Memory Region Support 00:52:11.001 ================================ 00:52:11.001 Supported: No 00:52:11.001 00:52:11.001 Admin Command Set Attributes 00:52:11.001 ============================ 00:52:11.001 Security Send/Receive: Not Supported 00:52:11.001 Format NVM: Not Supported 00:52:11.001 Firmware Activate/Download: Not Supported 00:52:11.001 Namespace Management: Not Supported 00:52:11.001 Device Self-Test: Not Supported 00:52:11.001 Directives: Not Supported 00:52:11.001 NVMe-MI: Not Supported 00:52:11.001 Virtualization Management: Not Supported 00:52:11.001 Doorbell Buffer Config: Not Supported 00:52:11.001 Get LBA Status Capability: Not Supported 00:52:11.001 Command & Feature Lockdown Capability: Not Supported 00:52:11.001 Abort Command Limit: 1 00:52:11.001 Async Event Request Limit: 1 00:52:11.001 Number of Firmware Slots: N/A 00:52:11.001 Firmware Slot 1 Read-Only: N/A 00:52:11.001 Firmware Activation Without Reset: N/A 00:52:11.001 Multiple Update Detection Support: N/A 00:52:11.001 Firmware Update Granularity: No Information Provided 00:52:11.001 Per-Namespace SMART Log: No 00:52:11.001 Asymmetric Namespace Access Log Page: Not Supported 00:52:11.001 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:52:11.001 Command Effects Log Page: Not Supported 00:52:11.001 Get Log Page Extended Data: Supported 00:52:11.001 Telemetry Log Pages: Not Supported 00:52:11.001 Persistent Event Log Pages: Not Supported 00:52:11.001 Supported Log Pages Log Page: May Support 00:52:11.001 Commands Supported & Effects Log Page: Not Supported 00:52:11.001 Feature Identifiers & Effects Log Page:May Support 00:52:11.001 NVMe-MI Commands & Effects Log Page: May Support 00:52:11.001 Data Area 4 for Telemetry Log: Not Supported 00:52:11.001 Error Log Page Entries Supported: 1 00:52:11.001 Keep Alive: Not Supported 00:52:11.001 00:52:11.001 NVM Command Set Attributes 00:52:11.002 ========================== 00:52:11.002 Submission Queue Entry Size 00:52:11.002 Max: 1 00:52:11.002 Min: 1 00:52:11.002 Completion Queue Entry Size 00:52:11.002 Max: 1 00:52:11.002 Min: 1 00:52:11.002 Number of Namespaces: 0 00:52:11.002 Compare Command: Not Supported 00:52:11.002 Write Uncorrectable Command: Not Supported 00:52:11.002 Dataset Management Command: Not Supported 00:52:11.002 Write Zeroes Command: Not Supported 00:52:11.002 Set Features Save Field: Not Supported 00:52:11.002 Reservations: Not Supported 00:52:11.002 Timestamp: Not Supported 00:52:11.002 Copy: Not Supported 00:52:11.002 Volatile Write Cache: Not Present 00:52:11.002 Atomic Write Unit (Normal): 1 00:52:11.002 Atomic Write Unit (PFail): 1 00:52:11.002 Atomic Compare & Write Unit: 1 00:52:11.002 Fused Compare & Write: Not Supported 00:52:11.002 Scatter-Gather List 00:52:11.002 SGL Command Set: Supported 00:52:11.002 SGL Keyed: Not Supported 00:52:11.002 SGL Bit Bucket Descriptor: Not Supported 00:52:11.002 SGL Metadata Pointer: Not Supported 00:52:11.002 Oversized SGL: Not Supported 00:52:11.002 SGL Metadata Address: Not Supported 00:52:11.002 SGL Offset: Supported 00:52:11.002 Transport SGL Data Block: Not Supported 00:52:11.002 Replay Protected Memory Block: Not Supported 00:52:11.002 00:52:11.002 Firmware Slot Information 00:52:11.002 ========================= 00:52:11.002 Active slot: 0 00:52:11.002 00:52:11.002 00:52:11.002 Error Log 00:52:11.002 
========= 00:52:11.002 00:52:11.002 Active Namespaces 00:52:11.002 ================= 00:52:11.002 Discovery Log Page 00:52:11.002 ================== 00:52:11.002 Generation Counter: 2 00:52:11.002 Number of Records: 2 00:52:11.002 Record Format: 0 00:52:11.002 00:52:11.002 Discovery Log Entry 0 00:52:11.002 ---------------------- 00:52:11.002 Transport Type: 3 (TCP) 00:52:11.002 Address Family: 1 (IPv4) 00:52:11.002 Subsystem Type: 3 (Current Discovery Subsystem) 00:52:11.002 Entry Flags: 00:52:11.002 Duplicate Returned Information: 0 00:52:11.002 Explicit Persistent Connection Support for Discovery: 0 00:52:11.002 Transport Requirements: 00:52:11.002 Secure Channel: Not Specified 00:52:11.002 Port ID: 1 (0x0001) 00:52:11.002 Controller ID: 65535 (0xffff) 00:52:11.002 Admin Max SQ Size: 32 00:52:11.002 Transport Service Identifier: 4420 00:52:11.002 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:52:11.002 Transport Address: 10.0.0.1 00:52:11.002 Discovery Log Entry 1 00:52:11.002 ---------------------- 00:52:11.002 Transport Type: 3 (TCP) 00:52:11.002 Address Family: 1 (IPv4) 00:52:11.002 Subsystem Type: 2 (NVM Subsystem) 00:52:11.002 Entry Flags: 00:52:11.002 Duplicate Returned Information: 0 00:52:11.002 Explicit Persistent Connection Support for Discovery: 0 00:52:11.002 Transport Requirements: 00:52:11.002 Secure Channel: Not Specified 00:52:11.002 Port ID: 1 (0x0001) 00:52:11.002 Controller ID: 65535 (0xffff) 00:52:11.002 Admin Max SQ Size: 32 00:52:11.002 Transport Service Identifier: 4420 00:52:11.002 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:52:11.002 Transport Address: 10.0.0.1 00:52:11.002 05:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:52:11.002 get_feature(0x01) failed 00:52:11.002 get_feature(0x02) failed 00:52:11.002 get_feature(0x04) failed 00:52:11.002 ===================================================== 00:52:11.002 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:52:11.002 ===================================================== 00:52:11.002 Controller Capabilities/Features 00:52:11.002 ================================ 00:52:11.002 Vendor ID: 0000 00:52:11.002 Subsystem Vendor ID: 0000 00:52:11.002 Serial Number: 3191ae2e4a90648c065d 00:52:11.002 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:52:11.002 Firmware Version: 6.8.9-20 00:52:11.002 Recommended Arb Burst: 6 00:52:11.002 IEEE OUI Identifier: 00 00 00 00:52:11.002 Multi-path I/O 00:52:11.002 May have multiple subsystem ports: Yes 00:52:11.002 May have multiple controllers: Yes 00:52:11.002 Associated with SR-IOV VF: No 00:52:11.002 Max Data Transfer Size: Unlimited 00:52:11.002 Max Number of Namespaces: 1024 00:52:11.002 Max Number of I/O Queues: 128 00:52:11.002 NVMe Specification Version (VS): 1.3 00:52:11.002 NVMe Specification Version (Identify): 1.3 00:52:11.002 Maximum Queue Entries: 1024 00:52:11.002 Contiguous Queues Required: No 00:52:11.002 Arbitration Mechanisms Supported 00:52:11.002 Weighted Round Robin: Not Supported 00:52:11.002 Vendor Specific: Not Supported 00:52:11.002 Reset Timeout: 7500 ms 00:52:11.002 Doorbell Stride: 4 bytes 00:52:11.002 NVM Subsystem Reset: Not Supported 00:52:11.002 Command Sets Supported 00:52:11.002 NVM Command Set: Supported 00:52:11.002 Boot Partition: Not Supported 00:52:11.002 
Memory Page Size Minimum: 4096 bytes 00:52:11.002 Memory Page Size Maximum: 4096 bytes 00:52:11.002 Persistent Memory Region: Not Supported 00:52:11.002 Optional Asynchronous Events Supported 00:52:11.002 Namespace Attribute Notices: Supported 00:52:11.002 Firmware Activation Notices: Not Supported 00:52:11.002 ANA Change Notices: Supported 00:52:11.002 PLE Aggregate Log Change Notices: Not Supported 00:52:11.002 LBA Status Info Alert Notices: Not Supported 00:52:11.002 EGE Aggregate Log Change Notices: Not Supported 00:52:11.002 Normal NVM Subsystem Shutdown event: Not Supported 00:52:11.002 Zone Descriptor Change Notices: Not Supported 00:52:11.002 Discovery Log Change Notices: Not Supported 00:52:11.002 Controller Attributes 00:52:11.002 128-bit Host Identifier: Supported 00:52:11.002 Non-Operational Permissive Mode: Not Supported 00:52:11.002 NVM Sets: Not Supported 00:52:11.002 Read Recovery Levels: Not Supported 00:52:11.002 Endurance Groups: Not Supported 00:52:11.002 Predictable Latency Mode: Not Supported 00:52:11.002 Traffic Based Keep ALive: Supported 00:52:11.002 Namespace Granularity: Not Supported 00:52:11.002 SQ Associations: Not Supported 00:52:11.002 UUID List: Not Supported 00:52:11.002 Multi-Domain Subsystem: Not Supported 00:52:11.002 Fixed Capacity Management: Not Supported 00:52:11.002 Variable Capacity Management: Not Supported 00:52:11.002 Delete Endurance Group: Not Supported 00:52:11.002 Delete NVM Set: Not Supported 00:52:11.002 Extended LBA Formats Supported: Not Supported 00:52:11.002 Flexible Data Placement Supported: Not Supported 00:52:11.002 00:52:11.002 Controller Memory Buffer Support 00:52:11.002 ================================ 00:52:11.002 Supported: No 00:52:11.002 00:52:11.003 Persistent Memory Region Support 00:52:11.003 ================================ 00:52:11.003 Supported: No 00:52:11.003 00:52:11.003 Admin Command Set Attributes 00:52:11.003 ============================ 00:52:11.003 Security Send/Receive: Not Supported 00:52:11.003 Format NVM: Not Supported 00:52:11.003 Firmware Activate/Download: Not Supported 00:52:11.003 Namespace Management: Not Supported 00:52:11.003 Device Self-Test: Not Supported 00:52:11.003 Directives: Not Supported 00:52:11.003 NVMe-MI: Not Supported 00:52:11.003 Virtualization Management: Not Supported 00:52:11.003 Doorbell Buffer Config: Not Supported 00:52:11.003 Get LBA Status Capability: Not Supported 00:52:11.003 Command & Feature Lockdown Capability: Not Supported 00:52:11.003 Abort Command Limit: 4 00:52:11.003 Async Event Request Limit: 4 00:52:11.003 Number of Firmware Slots: N/A 00:52:11.003 Firmware Slot 1 Read-Only: N/A 00:52:11.003 Firmware Activation Without Reset: N/A 00:52:11.003 Multiple Update Detection Support: N/A 00:52:11.003 Firmware Update Granularity: No Information Provided 00:52:11.003 Per-Namespace SMART Log: Yes 00:52:11.003 Asymmetric Namespace Access Log Page: Supported 00:52:11.003 ANA Transition Time : 10 sec 00:52:11.003 00:52:11.003 Asymmetric Namespace Access Capabilities 00:52:11.003 ANA Optimized State : Supported 00:52:11.003 ANA Non-Optimized State : Supported 00:52:11.003 ANA Inaccessible State : Supported 00:52:11.003 ANA Persistent Loss State : Supported 00:52:11.003 ANA Change State : Supported 00:52:11.003 ANAGRPID is not changed : No 00:52:11.003 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:52:11.003 00:52:11.003 ANA Group Identifier Maximum : 128 00:52:11.003 Number of ANA Group Identifiers : 128 00:52:11.003 Max Number of Allowed Namespaces : 1024 00:52:11.003 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:52:11.003 Command Effects Log Page: Supported 00:52:11.003 Get Log Page Extended Data: Supported 00:52:11.003 Telemetry Log Pages: Not Supported 00:52:11.003 Persistent Event Log Pages: Not Supported 00:52:11.003 Supported Log Pages Log Page: May Support 00:52:11.003 Commands Supported & Effects Log Page: Not Supported 00:52:11.003 Feature Identifiers & Effects Log Page:May Support 00:52:11.003 NVMe-MI Commands & Effects Log Page: May Support 00:52:11.003 Data Area 4 for Telemetry Log: Not Supported 00:52:11.003 Error Log Page Entries Supported: 128 00:52:11.003 Keep Alive: Supported 00:52:11.003 Keep Alive Granularity: 1000 ms 00:52:11.003 00:52:11.003 NVM Command Set Attributes 00:52:11.003 ========================== 00:52:11.003 Submission Queue Entry Size 00:52:11.003 Max: 64 00:52:11.003 Min: 64 00:52:11.003 Completion Queue Entry Size 00:52:11.003 Max: 16 00:52:11.003 Min: 16 00:52:11.003 Number of Namespaces: 1024 00:52:11.003 Compare Command: Not Supported 00:52:11.003 Write Uncorrectable Command: Not Supported 00:52:11.003 Dataset Management Command: Supported 00:52:11.003 Write Zeroes Command: Supported 00:52:11.003 Set Features Save Field: Not Supported 00:52:11.003 Reservations: Not Supported 00:52:11.003 Timestamp: Not Supported 00:52:11.003 Copy: Not Supported 00:52:11.003 Volatile Write Cache: Present 00:52:11.003 Atomic Write Unit (Normal): 1 00:52:11.003 Atomic Write Unit (PFail): 1 00:52:11.003 Atomic Compare & Write Unit: 1 00:52:11.003 Fused Compare & Write: Not Supported 00:52:11.003 Scatter-Gather List 00:52:11.003 SGL Command Set: Supported 00:52:11.003 SGL Keyed: Not Supported 00:52:11.003 SGL Bit Bucket Descriptor: Not Supported 00:52:11.003 SGL Metadata Pointer: Not Supported 00:52:11.003 Oversized SGL: Not Supported 00:52:11.003 SGL Metadata Address: Not Supported 00:52:11.003 SGL Offset: Supported 00:52:11.003 Transport SGL Data Block: Not Supported 00:52:11.003 Replay Protected Memory Block: Not Supported 00:52:11.003 00:52:11.003 Firmware Slot Information 00:52:11.003 ========================= 00:52:11.003 Active slot: 0 00:52:11.003 00:52:11.003 Asymmetric Namespace Access 00:52:11.003 =========================== 00:52:11.003 Change Count : 0 00:52:11.003 Number of ANA Group Descriptors : 1 00:52:11.003 ANA Group Descriptor : 0 00:52:11.003 ANA Group ID : 1 00:52:11.003 Number of NSID Values : 1 00:52:11.003 Change Count : 0 00:52:11.003 ANA State : 1 00:52:11.003 Namespace Identifier : 1 00:52:11.003 00:52:11.003 Commands Supported and Effects 00:52:11.003 ============================== 00:52:11.003 Admin Commands 00:52:11.003 -------------- 00:52:11.003 Get Log Page (02h): Supported 00:52:11.003 Identify (06h): Supported 00:52:11.003 Abort (08h): Supported 00:52:11.003 Set Features (09h): Supported 00:52:11.003 Get Features (0Ah): Supported 00:52:11.003 Asynchronous Event Request (0Ch): Supported 00:52:11.003 Keep Alive (18h): Supported 00:52:11.003 I/O Commands 00:52:11.003 ------------ 00:52:11.003 Flush (00h): Supported 00:52:11.003 Write (01h): Supported LBA-Change 00:52:11.003 Read (02h): Supported 00:52:11.003 Write Zeroes (08h): Supported LBA-Change 00:52:11.003 Dataset Management (09h): Supported 00:52:11.003 00:52:11.003 Error Log 00:52:11.003 ========= 00:52:11.003 Entry: 0 00:52:11.003 Error Count: 0x3 00:52:11.003 Submission Queue Id: 0x0 00:52:11.003 Command Id: 0x5 00:52:11.003 Phase Bit: 0 00:52:11.003 Status Code: 0x2 00:52:11.003 Status Code Type: 0x0 00:52:11.003 Do Not Retry: 1 00:52:11.264 
Error Location: 0x28 00:52:11.264 LBA: 0x0 00:52:11.264 Namespace: 0x0 00:52:11.264 Vendor Log Page: 0x0 00:52:11.264 ----------- 00:52:11.264 Entry: 1 00:52:11.264 Error Count: 0x2 00:52:11.264 Submission Queue Id: 0x0 00:52:11.264 Command Id: 0x5 00:52:11.264 Phase Bit: 0 00:52:11.264 Status Code: 0x2 00:52:11.264 Status Code Type: 0x0 00:52:11.264 Do Not Retry: 1 00:52:11.264 Error Location: 0x28 00:52:11.264 LBA: 0x0 00:52:11.264 Namespace: 0x0 00:52:11.264 Vendor Log Page: 0x0 00:52:11.264 ----------- 00:52:11.264 Entry: 2 00:52:11.264 Error Count: 0x1 00:52:11.264 Submission Queue Id: 0x0 00:52:11.264 Command Id: 0x4 00:52:11.264 Phase Bit: 0 00:52:11.264 Status Code: 0x2 00:52:11.264 Status Code Type: 0x0 00:52:11.264 Do Not Retry: 1 00:52:11.264 Error Location: 0x28 00:52:11.264 LBA: 0x0 00:52:11.264 Namespace: 0x0 00:52:11.264 Vendor Log Page: 0x0 00:52:11.264 00:52:11.264 Number of Queues 00:52:11.264 ================ 00:52:11.264 Number of I/O Submission Queues: 128 00:52:11.264 Number of I/O Completion Queues: 128 00:52:11.264 00:52:11.264 ZNS Specific Controller Data 00:52:11.264 ============================ 00:52:11.264 Zone Append Size Limit: 0 00:52:11.264 00:52:11.264 00:52:11.264 Active Namespaces 00:52:11.264 ================= 00:52:11.264 get_feature(0x05) failed 00:52:11.264 Namespace ID:1 00:52:11.264 Command Set Identifier: NVM (00h) 00:52:11.264 Deallocate: Supported 00:52:11.264 Deallocated/Unwritten Error: Not Supported 00:52:11.264 Deallocated Read Value: Unknown 00:52:11.264 Deallocate in Write Zeroes: Not Supported 00:52:11.264 Deallocated Guard Field: 0xFFFF 00:52:11.264 Flush: Supported 00:52:11.264 Reservation: Not Supported 00:52:11.264 Namespace Sharing Capabilities: Multiple Controllers 00:52:11.264 Size (in LBAs): 1953525168 (931GiB) 00:52:11.264 Capacity (in LBAs): 1953525168 (931GiB) 00:52:11.264 Utilization (in LBAs): 1953525168 (931GiB) 00:52:11.264 UUID: 4c0f1478-5739-4920-a49b-5e7910acf08e 00:52:11.264 Thin Provisioning: Not Supported 00:52:11.264 Per-NS Atomic Units: Yes 00:52:11.264 Atomic Boundary Size (Normal): 0 00:52:11.264 Atomic Boundary Size (PFail): 0 00:52:11.264 Atomic Boundary Offset: 0 00:52:11.264 NGUID/EUI64 Never Reused: No 00:52:11.264 ANA group ID: 1 00:52:11.264 Namespace Write Protected: No 00:52:11.264 Number of LBA Formats: 1 00:52:11.264 Current LBA Format: LBA Format #00 00:52:11.264 LBA Format #00: Data Size: 512 Metadata Size: 0 00:52:11.264 00:52:11.264 05:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:52:11.264 05:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:52:11.264 05:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:52:11.264 05:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:52:11.264 05:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:52:11.264 05:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:52:11.264 05:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:52:11.264 rmmod nvme_tcp 00:52:11.264 rmmod nvme_fabrics 00:52:11.264 05:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:52:11.264 05:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:52:11.264 05:47:05 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:52:11.264 05:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:52:11.264 05:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:52:11.264 05:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:52:11.264 05:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:52:11.264 05:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:52:11.264 05:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:52:11.264 05:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:52:11.264 05:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:52:11.264 05:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:52:11.264 05:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:52:11.264 05:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:52:11.264 05:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:52:11.264 05:47:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:52:13.210 05:47:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:52:13.210 05:47:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:52:13.210 05:47:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:52:13.210 05:47:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:52:13.210 05:47:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:52:13.210 05:47:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:52:13.210 05:47:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:52:13.210 05:47:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:52:13.210 05:47:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:52:13.210 05:47:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:52:13.210 05:47:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:52:14.584 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:52:14.584 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:52:14.584 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:52:14.584 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:52:14.584 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:52:14.584 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:52:14.584 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:52:14.584 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:52:14.584 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:52:14.584 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:52:14.584 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:52:14.584 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:52:14.584 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:52:14.584 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:52:14.584 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:52:14.584 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:52:15.523 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:52:15.782 00:52:15.782 real 0m9.999s 00:52:15.782 user 0m2.257s 00:52:15.782 sys 0m3.752s 00:52:15.782 05:47:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:15.782 05:47:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:52:15.782 ************************************ 00:52:15.782 END TEST nvmf_identify_kernel_target 00:52:15.782 ************************************ 00:52:15.782 05:47:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:52:15.782 05:47:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:52:15.782 05:47:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:15.782 05:47:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:52:15.782 ************************************ 00:52:15.782 START TEST nvmf_auth_host 00:52:15.782 ************************************ 00:52:15.782 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:52:15.782 * Looking for test storage... 
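The test that just finished (nvmf_identify_kernel_target) builds its target out of the Linux kernel nvmet stack rather than the SPDK application: configure_kernel_target assembles a subsystem, namespace and TCP port purely through configfs, and clean_kernel_target unwinds it. The sketch below condenses those traced steps; because xtrace records only the echoed values and not the files they are redirected into, the attribute paths are the standard nvmet configfs names and should be read as a reconstruction, with nqn.2016-06.io.spdk:testnqn, /dev/nvme0n1 and 10.0.0.1:4420 being the values used in this run.

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  modprobe nvmet           # setup only loads nvmet; nvmet_tcp is present by teardown time for the tcp port
  mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"       # surfaces as the Model Number above
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"             # backing block device found earlier
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"                       # expose the subsystem on the port
  # Teardown, mirroring clean_kernel_target:
  echo 0 > "$subsys/namespaces/1/enable"
  rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
  modprobe -r nvmet_tcp nvmet

Once the port symlink exists, the nvme discover and spdk_nvme_identify invocations shown earlier can reach the discovery subsystem and nqn.2016-06.io.spdk:testnqn at 10.0.0.1:4420.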
00:52:15.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:52:15.782 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:52:15.782 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:52:15.782 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:52:15.782 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:52:15.782 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:52:15.782 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:52:15.782 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:52:15.782 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:52:15.782 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:52:15.782 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:52:15.782 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:52:15.782 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:52:15.782 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:52:15.782 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:52:15.782 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:52:15.782 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:52:15.782 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:52:15.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:15.783 --rc genhtml_branch_coverage=1 00:52:15.783 --rc genhtml_function_coverage=1 00:52:15.783 --rc genhtml_legend=1 00:52:15.783 --rc geninfo_all_blocks=1 00:52:15.783 --rc geninfo_unexecuted_blocks=1 00:52:15.783 00:52:15.783 ' 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:52:15.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:15.783 --rc genhtml_branch_coverage=1 00:52:15.783 --rc genhtml_function_coverage=1 00:52:15.783 --rc genhtml_legend=1 00:52:15.783 --rc geninfo_all_blocks=1 00:52:15.783 --rc geninfo_unexecuted_blocks=1 00:52:15.783 00:52:15.783 ' 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:52:15.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:15.783 --rc genhtml_branch_coverage=1 00:52:15.783 --rc genhtml_function_coverage=1 00:52:15.783 --rc genhtml_legend=1 00:52:15.783 --rc geninfo_all_blocks=1 00:52:15.783 --rc geninfo_unexecuted_blocks=1 00:52:15.783 00:52:15.783 ' 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:52:15.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:15.783 --rc genhtml_branch_coverage=1 00:52:15.783 --rc genhtml_function_coverage=1 00:52:15.783 --rc genhtml_legend=1 00:52:15.783 --rc geninfo_all_blocks=1 00:52:15.783 --rc geninfo_unexecuted_blocks=1 00:52:15.783 00:52:15.783 ' 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:52:15.783 05:47:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:52:15.783 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:52:15.783 05:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:52:15.783 05:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:52:15.783 05:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:52:15.783 05:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:52:15.783 05:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:52:18.317 05:47:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:52:18.317 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:52:18.317 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:52:18.317 
05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:52:18.317 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:52:18.318 Found net devices under 0000:0a:00.0: cvl_0_0 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:52:18.318 Found net devices under 0000:0a:00.1: cvl_0_1 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:52:18.318 05:47:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:52:18.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:52:18.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:52:18.318 00:52:18.318 --- 10.0.0.2 ping statistics --- 00:52:18.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:18.318 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:52:18.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:52:18.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:52:18.318 00:52:18.318 --- 10.0.0.1 ping statistics --- 00:52:18.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:18.318 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=743393 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 743393 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 743393 ']' 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
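For reference, the target/initiator plumbing traced above (nvmf_tcp_init) boils down to roughly the following. This is a condensed sketch, assuming root privileges and that the two ice ports have already been renamed cvl_0_0/cvl_0_1 as in this run:

  # Put the target-side port in its own network namespace, give each side an IP,
  # open TCP/4420 on the initiator side, and verify reachability in both directions.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP (namespace)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                 # host -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # namespaced target -> host

The nvmf_tgt application is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth), so the connections exercised below travel over the physical link between the two E810 ports rather than over loopback.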
00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:18.318 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=486b3eeb2c4239f539d256641edf3ea8 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.rJS 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 486b3eeb2c4239f539d256641edf3ea8 0 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 486b3eeb2c4239f539d256641edf3ea8 0 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=486b3eeb2c4239f539d256641edf3ea8 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.rJS 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.rJS 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.rJS 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:52:18.575 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:52:18.575 05:47:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=73fd7287f9772e72712a8650f71cd1260ec2bc5a9601a23f4cc90979e578c81a 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Uxg 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 73fd7287f9772e72712a8650f71cd1260ec2bc5a9601a23f4cc90979e578c81a 3 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 73fd7287f9772e72712a8650f71cd1260ec2bc5a9601a23f4cc90979e578c81a 3 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=73fd7287f9772e72712a8650f71cd1260ec2bc5a9601a23f4cc90979e578c81a 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Uxg 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Uxg 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Uxg 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7992fffcff18e0ba5633f972e2203c2729b5a5caf6742e69 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.nSp 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7992fffcff18e0ba5633f972e2203c2729b5a5caf6742e69 0 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7992fffcff18e0ba5633f972e2203c2729b5a5caf6742e69 0 
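The gen_dhchap_key calls above turn raw bytes from /dev/urandom into the DHHC-1 secrets used for DH-HMAC-CHAP. The trace shows the xxd/mktemp/chmod steps but never prints the body of the `python -` helper, so the following is only a best-guess reconstruction of the formatting step: the base64-plus-CRC-32 layout follows the NVMe DH-HMAC-CHAP secret representation, and the inline python is illustrative, not SPDK's actual code.

  # Sketch: build a "null"-digest, 48-character DHHC-1 secret file (the len=48 case above).
  key_hex=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes -> 48 hex characters
  digest_id=0                                # 0=null, 1=sha256, 2=sha384, 3=sha512
  # Secret representation assumed here: base64(secret || CRC-32(secret), little-endian)
  b64=$(python3 -c 'import base64,sys,zlib; s=sys.argv[1].encode(); print(base64.b64encode(s + zlib.crc32(s).to_bytes(4,"little")).decode())' "$key_hex")
  keyfile=$(mktemp -t spdk.key-null.XXX)     # same template the trace uses
  printf 'DHHC-1:%02d:%s:\n' "$digest_id" "$b64" > "$keyfile"
  chmod 0600 "$keyfile"                      # keys must not be world-readable
  echo "$keyfile"

The resulting strings match the shape seen later in the trace (e.g. DHHC-1:00:...==:), where the two-digit field encodes the hash used to derive the secret.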
00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7992fffcff18e0ba5633f972e2203c2729b5a5caf6742e69 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.nSp 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.nSp 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.nSp 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b69f03a8b6e197ff608373ce777d317869a040d596f793af 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.dqH 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b69f03a8b6e197ff608373ce777d317869a040d596f793af 2 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b69f03a8b6e197ff608373ce777d317869a040d596f793af 2 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b69f03a8b6e197ff608373ce777d317869a040d596f793af 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.dqH 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.dqH 00:52:18.576 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.dqH 00:52:18.834 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:52:18.834 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:52:18.834 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:52:18.834 05:47:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:52:18.834 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:52:18.834 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:52:18.834 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:52:18.834 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=224ddb90c903c62ffb96fb9fc6f09cad 00:52:18.834 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:52:18.834 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.yH6 00:52:18.834 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 224ddb90c903c62ffb96fb9fc6f09cad 1 00:52:18.834 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 224ddb90c903c62ffb96fb9fc6f09cad 1 00:52:18.834 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:52:18.834 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:52:18.834 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=224ddb90c903c62ffb96fb9fc6f09cad 00:52:18.834 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:52:18.834 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:52:18.834 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.yH6 00:52:18.834 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.yH6 00:52:18.834 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.yH6 00:52:18.834 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:52:18.834 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=710f26e10a5869c6cb2e29c6de144c24 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.UIC 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 710f26e10a5869c6cb2e29c6de144c24 1 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 710f26e10a5869c6cb2e29c6de144c24 1 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=710f26e10a5869c6cb2e29c6de144c24 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.UIC 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.UIC 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.UIC 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a0d13a4e317edc09952e36645ffae50c93a1c445d114ba32 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.0zF 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a0d13a4e317edc09952e36645ffae50c93a1c445d114ba32 2 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a0d13a4e317edc09952e36645ffae50c93a1c445d114ba32 2 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a0d13a4e317edc09952e36645ffae50c93a1c445d114ba32 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.0zF 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.0zF 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.0zF 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:52:18.835 05:47:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=42a82bbd9a9112685fb63162c45eda21 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.oCP 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 42a82bbd9a9112685fb63162c45eda21 0 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 42a82bbd9a9112685fb63162c45eda21 0 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=42a82bbd9a9112685fb63162c45eda21 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:52:18.835 05:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:52:18.835 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.oCP 00:52:18.835 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.oCP 00:52:18.835 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.oCP 00:52:18.835 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:52:18.835 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:52:18.835 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:52:18.835 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:52:18.835 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:52:18.835 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:52:18.835 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:52:18.835 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ccead0d9553bddd5e795d4d6a288dcc26dc84579381cf0f06c9eb19f619167bb 00:52:18.835 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:52:18.835 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.NeS 00:52:18.835 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ccead0d9553bddd5e795d4d6a288dcc26dc84579381cf0f06c9eb19f619167bb 3 00:52:18.835 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ccead0d9553bddd5e795d4d6a288dcc26dc84579381cf0f06c9eb19f619167bb 3 00:52:18.835 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:52:18.835 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:52:18.835 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ccead0d9553bddd5e795d4d6a288dcc26dc84579381cf0f06c9eb19f619167bb 00:52:18.835 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:52:18.835 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:52:19.093 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.NeS 00:52:19.093 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.NeS 00:52:19.093 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.NeS 00:52:19.093 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:52:19.093 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 743393 00:52:19.093 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 743393 ']' 00:52:19.093 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:19.093 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:19.093 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:19.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:19.093 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:19.093 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rJS 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Uxg ]] 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Uxg 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.nSp 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.dqH ]] 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.dqH 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.yH6 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.UIC ]] 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UIC 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.0zF 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.oCP ]] 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.oCP 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.NeS 00:52:19.352 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:19.353 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:19.353 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:19.353 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:52:19.353 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:52:19.353 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:52:19.353 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:19.353 05:47:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:19.353 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:19.353 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:19.353 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:19.353 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:19.353 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:19.353 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:19.353 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:19.353 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:19.353 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:52:19.353 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:52:19.353 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:52:19.353 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:52:19.353 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:52:19.353 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:52:19.353 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:52:19.353 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:52:19.353 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:52:19.353 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:52:19.353 05:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:52:20.287 Waiting for block devices as requested 00:52:20.287 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:52:20.287 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:52:20.545 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:52:20.545 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:52:20.803 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:52:20.803 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:52:20.803 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:52:20.803 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:52:21.061 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:52:21.061 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:52:21.061 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:52:21.061 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:52:21.318 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:52:21.319 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:52:21.319 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:52:21.319 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:52:21.575 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:52:21.833 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:52:21.833 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:52:21.833 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:52:21.833 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:52:21.833 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:52:21.833 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:52:21.833 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:52:21.833 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:52:21.833 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:52:22.091 No valid GPT data, bailing 00:52:22.091 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:52:22.091 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:52:22.091 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:52:22.091 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:52:22.091 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:52:22.091 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:52:22.091 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:52:22.091 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:52:22.091 05:47:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:52:22.091 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:52:22.091 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:52:22.091 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:52:22.091 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:52:22.091 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:52:22.091 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:52:22.091 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:52:22.091 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:52:22.091 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:52:22.091 00:52:22.091 Discovery Log Number of Records 2, Generation counter 2 00:52:22.091 =====Discovery Log Entry 0====== 00:52:22.091 trtype: tcp 00:52:22.091 adrfam: ipv4 00:52:22.091 subtype: current discovery subsystem 00:52:22.091 treq: not specified, sq flow control disable supported 00:52:22.091 portid: 1 00:52:22.091 trsvcid: 4420 00:52:22.091 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:52:22.091 traddr: 10.0.0.1 00:52:22.091 eflags: none 00:52:22.091 sectype: none 00:52:22.091 =====Discovery Log Entry 1====== 00:52:22.091 trtype: tcp 00:52:22.091 adrfam: ipv4 00:52:22.091 subtype: nvme subsystem 00:52:22.091 treq: not specified, sq flow control disable supported 00:52:22.091 portid: 1 00:52:22.091 trsvcid: 4420 00:52:22.091 subnqn: nqn.2024-02.io.spdk:cnode0 00:52:22.091 traddr: 10.0.0.1 00:52:22.091 eflags: none 00:52:22.091 sectype: none 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: ]] 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:22.092 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:22.350 nvme0n1 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: ]] 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
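Note on the host-side flow traced above: for each digest/dhgroup/key-id combination, auth.sh first sets the DH-HMAC-CHAP options on the SPDK host and then attaches the controller with the matching key pair, checks that the controller shows up, and detaches it. As a rough standalone sketch only (assuming rpc_cmd wraps SPDK's scripts/rpc.py, and that the named keys key1/ckey1 were already registered with the application's keyring earlier in the run, e.g. via keyring_file_add_key, which is not shown in this excerpt), the same sequence issued by hand would look like:

  ./scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0

When authentication succeeds, bdev_nvme_get_controllers reports nvme0 (hence the "[[ nvme0 == \n\v\m\e\0 ]]" checks in the trace) and the test detaches the controller before moving on to the next key id.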
00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:22.350 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:22.607 nvme0n1 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:22.607 05:47:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: ]] 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:22.607 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:22.608 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:22.608 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:22.608 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:22.608 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:22.608 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:22.608 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:22.608 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:22.608 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:22.608 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:22.608 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:22.608 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:22.608 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:22.865 nvme0n1 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: ]] 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:22.865 05:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:23.122 nvme0n1 00:52:23.122 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: ]] 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:23.123 nvme0n1 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:23.123 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.380 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:23.380 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:23.380 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.380 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:23.380 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.380 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:23.380 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:52:23.380 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:23.380 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:23.380 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:52:23.380 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:52:23.380 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 
00:52:23.380 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:52:23.380 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:23.380 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:52:23.380 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:23.380 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:52:23.380 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:52:23.380 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:23.380 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:52:23.380 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:52:23.380 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:52:23.380 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:23.380 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:52:23.380 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.380 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:23.381 nvme0n1 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:23.381 05:47:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: ]] 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:52:23.381 
05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.381 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:23.639 nvme0n1 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: ]] 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:23.639 05:47:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.639 05:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:23.897 nvme0n1 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: ]] 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:52:23.897 05:47:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:23.897 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:24.155 nvme0n1 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: ]] 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:24.155 05:47:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:24.155 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:24.413 nvme0n1 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
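Note on the nvmet_auth_set_key calls interleaved with the connects: they configure the other end of the handshake, the kernel nvmet soft target that this test connects back to. The helper's body is not captured in this excerpt; the echoed 'hmac(shaXXX)', dhgroup and DHHC-1 strings are the values it pushes into the target's per-host authentication settings. A minimal sketch of that write-out, with the configfs paths and attribute names assumed rather than taken from this trace:

  # assumed Linux nvmet configfs layout for the host entry; key value abbreviated
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)'        > "$host_dir/dhchap_hash"
  echo ffdhe3072             > "$host_dir/dhchap_dhgroup"
  echo 'DHHC-1:03:Y2Nl...=:' > "$host_dir/dhchap_key"       # keyid 4 from the trace
  # keyid 4 carries an empty ckey above, so no bidirectional dhchap_ctrl_key is set

For the key ids that do carry a controller secret (keyids 0 through 3 above), the controller key would additionally be written so the host can authenticate the controller in return.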
00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:24.413 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:24.671 nvme0n1 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: ]] 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:24.671 05:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:24.928 nvme0n1 00:52:24.928 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:24.928 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:24.928 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:24.928 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:24.928 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:25.186 05:47:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: ]] 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:25.186 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:25.444 nvme0n1 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: ]] 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
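For reference, each digest/dhgroup/keyid iteration traced in this log exercises the same RPC sequence. The sketch below is assembled only from commands visible above (rpc_cmd is the test suite's RPC helper; the NQNs, target address, and key slots are the ones printed in the trace), and is illustrative rather than a replacement for host/auth.sh itself:
  # target side: provision the DH-HMAC-CHAP key for this digest/dhgroup/keyid
  nvmet_auth_set_key sha256 ffdhe4096 2
  # host side: restrict negotiation to the digest and dhgroup under test
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  # connect with the matching host key (and controller key, when a ckey is defined for that keyid)
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # verify the controller came up, then tear it down before the next keyid
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expected: nvme0
  rpc_cmd bdev_nvme_detach_controller nvme0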
00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:25.444 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:25.702 nvme0n1 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: ]] 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:52:25.702 05:47:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:25.702 05:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:25.960 nvme0n1 00:52:25.960 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:25.960 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:25.960 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:25.960 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:25.960 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:25.960 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:25.960 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:25.960 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:25.960 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:25.960 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:26.218 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:26.477 nvme0n1 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: ]] 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 
]] 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:26.477 05:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:27.045 nvme0n1 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: ]] 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:27.045 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:27.611 nvme0n1 00:52:27.611 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:27.611 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:27.611 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:27.611 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:27.611 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:27.611 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:27.611 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:27.611 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:27.611 05:47:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:27.611 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:27.611 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:27.611 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:27.611 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:52:27.611 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:27.611 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:27.611 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:52:27.611 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:52:27.611 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:27.611 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:27.611 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:27.611 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:52:27.611 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:27.611 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: ]] 00:52:27.612 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:27.612 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:52:27.612 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:27.612 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:52:27.612 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:52:27.612 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:52:27.612 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:27.612 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:52:27.612 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:27.612 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:27.612 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:27.612 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:27.612 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:27.612 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:27.612 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:27.612 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:27.612 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:27.612 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:27.612 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:27.612 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:27.612 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:27.612 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:27.612 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:27.612 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:27.612 05:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:28.176 nvme0n1 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:28.176 
05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: ]] 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:28.176 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:28.741 nvme0n1 00:52:28.741 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:28.741 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:28.741 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:28.741 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:28.741 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:52:28.741 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:28.741 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:28.741 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:28.741 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:28.741 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:28.741 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:28.741 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:28.741 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:52:28.741 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:28.741 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:28.741 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:52:28.741 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:28.742 05:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:29.309 nvme0n1 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:29.309 05:47:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: ]] 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:29.309 05:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:30.242 nvme0n1 00:52:30.242 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:30.242 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:30.242 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:30.242 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:30.242 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:30.242 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:30.242 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:30.242 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:30.242 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:30.242 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:30.242 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:30.242 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:30.242 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:52:30.242 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:30.242 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:30.242 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:52:30.242 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:52:30.242 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:30.242 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:30.242 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:30.242 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:52:30.242 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:30.242 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: ]] 00:52:30.242 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:30.242 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:52:30.242 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:30.243 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:52:30.243 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:52:30.243 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:52:30.243 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:30.243 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:52:30.243 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:30.243 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:30.243 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:30.243 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:30.243 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:30.243 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:30.243 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:30.243 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:30.243 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:30.243 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:30.243 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:30.243 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:30.243 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:30.243 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:30.243 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:30.243 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:30.243 05:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:31.178 nvme0n1 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:31.178 05:47:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: ]] 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:31.178 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:31.742 nvme0n1 00:52:31.742 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:31.742 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:31.742 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:31.742 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:31.742 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:31.999 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:31.999 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:31.999 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:31.999 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:31.999 05:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: ]] 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:52:31.999 05:47:26 
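The nvmet_auth_set_key traces (host/auth.sh@42 through @51, with the echo 'hmac(sha256)', echo ffdhe8192 and echo DHHC-1 lines) are the target-side half: the same digest, DH group and secrets are pushed to the Linux kernel nvmet target for the test host NQN. The redirection targets are not visible in this excerpt; the sketch below assumes they are the standard nvmet configfs host attributes and reuses the keyid 2 pair echoed a little earlier in the trace:

# Target-side sketch (assumed configfs layout of the kernel nvmet auth interface).
hostnqn=nqn.2024-02.io.spdk:host0
hostdir=/sys/kernel/config/nvmet/hosts/$hostnqn
echo 'hmac(sha256)' > "$hostdir/dhchap_hash"
echo ffdhe8192 > "$hostdir/dhchap_dhgroup"
echo 'DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd:' > "$hostdir/dhchap_key"
echo 'DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy:' > "$hostdir/dhchap_ctrl_key"

dhchap_ctrl_key only needs to be written when a controller (bidirectional) secret is being exercised; the keyid 4 passes in this log leave it empty.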
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:31.999 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:32.931 nvme0n1 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:32.931 05:47:26 
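Every iteration closes with the same verification seen at host/auth.sh@64 and @65 above: list the controllers, confirm the attach produced nvme0, then detach so the next key is tried against a clean state. As a standalone check, using only the RPCs visible in this trace, that amounts to roughly:

# Post-connect verification and teardown (sketch of the auth.sh@64/@65 steps).
name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] || exit 1   # authentication must have yielded a usable controller
./scripts/rpc.py bdev_nvme_detach_controller nvme0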
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:32.931 05:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:33.865 nvme0n1 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: ]] 00:52:33.865 
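Here the outer loops move on from sha256/ffdhe8192 to sha384/ffdhe2048 (host/auth.sh@100 through @103 above), which is the overall shape of the test: every digest is paired with every DH group and every key index. A rough reconstruction of that skeleton, hedged because only the loop headers and the two callees appear in this trace:

# Iteration structure of the auth test (reconstruction from the traced loop headers).
for digest in "${digests[@]}"; do          # sha256 and sha384 are visible in this run
  for dhgroup in "${dhgroups[@]}"; do      # ffdhe8192, ffdhe2048, ffdhe3072 are visible
    for keyid in "${!keys[@]}"; do         # key indices 0 through 4
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # target side
      connect_authenticate "$digest" "$dhgroup" "$keyid"   # host side
    done
  done
done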
05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:33.865 nvme0n1 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:33.865 05:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: ]] 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:33.865 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:34.122 nvme0n1 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: ]] 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:34.122 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:34.380 nvme0n1 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: ]] 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:52:34.380 05:47:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:34.380 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:34.637 nvme0n1 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
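The recurring nvmf/common.sh@769 through @783 block is get_main_ns_ip picking the address the host should dial: NVMF_INITIATOR_IP for tcp transports, NVMF_FIRST_TARGET_IP for rdma, which in this run dereferences to 10.0.0.1. A condensed sketch of that selection, with the transport variable name assumed (the trace only shows its expanded value, tcp):

# Address selection as traced in get_main_ns_ip (reconstruction, variable names assumed).
get_main_ns_ip() {
  local ip
  local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
  ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp in this run, so NVMF_INITIATOR_IP
  ip=${!ip}                              # indirect expansion, 10.0.0.1 here
  [[ -n $ip ]] && echo "$ip"
}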
00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:52:34.637 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:34.908 nvme0n1 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: ]] 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:52:34.908 05:47:28 
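A note on the DHHC-1 strings echoed throughout (for example DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: just above): this is the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<transform>:<base64 payload>:, where the two-digit field records how the secret was transformed (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the payload carries the secret plus a CRC. A quick, purely illustrative way to pull the fields apart:

# Split a DHHC-1 secret into its fields (illustrative only).
secret='DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd:'
IFS=: read -r fmt transform payload _ <<< "$secret"
echo "format=$fmt transform=$transform payload=$payload"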
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:34.908 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:34.909 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:34.909 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:34.909 05:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:35.167 nvme0n1 00:52:35.167 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:35.167 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:35.167 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:35.167 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:35.167 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:35.167 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:35.167 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:35.167 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:35.167 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:35.167 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:35.167 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:35.167 05:47:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:35.167 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:52:35.167 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:35.167 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:52:35.167 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:52:35.167 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:52:35.167 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:35.167 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:35.167 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:52:35.167 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:52:35.167 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:35.167 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: ]] 00:52:35.167 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:35.167 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:52:35.168 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:35.168 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:52:35.168 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:52:35.168 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:52:35.168 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:35.168 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:52:35.168 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:35.168 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:35.168 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:35.168 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:35.168 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:35.168 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:35.168 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:35.168 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:35.168 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:35.168 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:35.168 05:47:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:35.168 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:35.168 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:35.168 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:35.168 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:35.168 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:35.168 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:35.472 nvme0n1 00:52:35.472 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:35.472 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:35.472 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:35.472 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:35.472 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:35.472 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:35.472 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:35.472 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:35.472 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:35.472 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:35.472 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:35.472 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:35.472 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:52:35.472 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:35.472 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:52:35.472 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:52:35.472 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:52:35.472 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:35.472 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: ]] 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:35.473 nvme0n1 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:35.473 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:35.773 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:35.773 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:52:35.773 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:35.773 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:35.773 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:35.773 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:35.773 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:35.773 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:52:35.773 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:35.773 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:52:35.773 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:52:35.773 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:52:35.773 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:35.773 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:35.773 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:52:35.773 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:52:35.773 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:35.773 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: ]] 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:35.774 nvme0n1 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:35.774 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:36.031 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:36.031 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:36.031 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:52:36.031 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:36.031 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:52:36.031 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:52:36.031 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:52:36.031 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:36.031 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:52:36.031 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:52:36.031 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:52:36.031 
05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:36.031 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:52:36.031 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:52:36.031 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:36.031 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:52:36.031 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:52:36.031 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:52:36.031 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:36.031 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:52:36.031 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:36.031 05:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:36.031 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:36.031 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:36.031 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:36.031 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:36.032 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:36.032 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:36.032 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:36.032 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:36.032 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:36.032 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:36.032 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:36.032 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:36.032 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:52:36.032 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:36.032 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:36.032 nvme0n1 00:52:36.032 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:36.032 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:36.032 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:36.032 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:36.032 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:36.032 
05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:36.032 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:36.032 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:36.032 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:36.032 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: ]] 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:36.290 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:36.547 nvme0n1 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: ]] 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:36.548 05:47:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:36.548 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:36.807 nvme0n1 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: ]] 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:36.807 05:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:37.064 nvme0n1 00:52:37.064 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:37.064 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:37.064 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:37.064 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:37.064 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:37.064 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: ]] 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:37.322 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:37.581 nvme0n1 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:52:37.581 05:47:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:37.581 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:37.840 nvme0n1 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: ]] 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:37.840 05:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:38.406 nvme0n1 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: ]] 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:38.406 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:38.407 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:38.407 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:38.407 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:38.407 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:38.407 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:38.407 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:38.407 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:38.407 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:38.407 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:38.407 05:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:38.972 nvme0n1 00:52:38.972 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:38.972 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:38.972 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:38.972 05:47:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:38.972 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:38.972 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:38.972 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:38.972 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:38.972 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:38.972 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:38.972 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: ]] 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:38.973 05:47:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:38.973 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:39.539 nvme0n1 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: ]] 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:52:39.539 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:39.540 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:39.540 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:39.540 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:39.540 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:39.540 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:39.540 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:39.540 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:39.540 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:39.540 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:39.540 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:39.540 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:39.540 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:39.540 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:39.540 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:52:39.540 05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:39.540 
05:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:40.106 nvme0n1 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:40.106 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:40.672 nvme0n1 00:52:40.672 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:40.672 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:40.672 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:40.672 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:40.672 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:40.672 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:40.672 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:40.672 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:40.672 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:40.672 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:40.672 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:40.672 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:52:40.672 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:40.672 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:52:40.672 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:40.672 05:47:34 
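The blocks above and below are iterations of one sweep: host/auth.sh@100-103 loop over every digest, DH group, and key index, call nvmet_auth_set_key, and then connect_authenticate (host/auth.sh@55-65) drives the initiator side through SPDK RPCs. A condensed sketch of that per-iteration sequence, assembled from the traced commands (the loop arrays and inlining of connect_authenticate are assumptions inferred from the trace, not the verbatim script):

    for digest in "${digests[@]}"; do                       # sha384 and sha512 in this part of the log
      for dhgroup in "${dhgroups[@]}"; do                    # ffdhe2048 through ffdhe8192
        for keyid in "${!keys[@]}"; do                       # key indices 0 through 4
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # program the target (see sketch above)
          # connect_authenticate: restrict the initiator to this digest/dhgroup ...
          rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          # ... and attach with the matching key (plus the controller key, when one is defined)
          rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" \
            ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
          # verify the authenticated controller came up, then detach before the next combination
          [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
          rpc_cmd bdev_nvme_detach_controller nvme0
        done
      done
    done
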
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:52:40.672 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:52:40.672 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:52:40.672 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: ]] 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:40.673 05:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:41.604 nvme0n1 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: ]] 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:41.604 05:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:42.537 nvme0n1 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: ]] 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:42.537 
05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:42.537 05:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:43.472 nvme0n1 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: ]] 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:43.472 05:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:44.404 nvme0n1 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:44.404 05:47:38 
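get_main_ns_ip, traced repeatedly above (nvmf/common.sh@769-783), resolves which address the initiator should dial for the configured transport; for tcp it returns NVMF_INITIATOR_IP, which is 10.0.0.1 throughout this run. A rough reconstruction from the trace (the TEST_TRANSPORT variable name, the guards' return values, and the indirect expansion are assumptions; xtrace only shows the already-resolved values):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs (this job) dial the initiator IP
        [[ -z $TEST_TRANSPORT ]] && return 1                    # assumed guard, mirrors '[[ -z tcp ]]'
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                    # holds the variable *name*
        [[ -z ${!ip} ]] && return 1                             # dereference, e.g. NVMF_INITIATOR_IP=10.0.0.1
        echo "${!ip}"
    }
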
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:44.404 05:47:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:44.404 05:47:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:45.335 nvme0n1 00:52:45.335 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:45.335 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: ]] 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:52:45.336 nvme0n1 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: ]] 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:45.336 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:45.593 nvme0n1 00:52:45.593 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:45.593 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:45.593 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:45.593 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:45.593 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:45.593 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:45.593 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:45.593 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:45.593 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:45.593 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:45.593 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:45.593 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:45.593 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:52:45.593 
05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:45.593 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:52:45.593 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:52:45.593 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:52:45.593 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:45.593 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:45.593 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:52:45.593 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:52:45.594 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:45.594 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: ]] 00:52:45.594 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:45.594 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:52:45.594 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:45.594 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:52:45.594 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:52:45.594 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:52:45.594 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:45.594 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:52:45.594 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:45.594 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:45.594 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:45.594 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:45.594 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:45.594 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:45.594 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:45.594 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:45.594 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:45.594 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:45.594 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:45.594 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:45.594 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:45.594 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:45.594 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:45.594 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:45.594 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:45.851 nvme0n1 00:52:45.851 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: ]] 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:45.852 
05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:45.852 05:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:46.110 nvme0n1 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:52:46.110 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:52:46.111 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:46.111 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:52:46.111 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:46.111 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:46.111 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:46.111 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:46.111 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:46.111 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:46.111 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:46.111 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:46.111 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:46.111 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:46.111 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:46.111 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:46.111 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:46.111 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:46.111 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:52:46.111 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:46.111 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:46.368 nvme0n1 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: ]] 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:46.368 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:46.369 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:46.369 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:46.369 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:46.369 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:46.369 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:46.369 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:46.369 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:46.369 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:46.626 nvme0n1 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:46.626 
05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: ]] 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:46.626 05:47:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:46.626 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:46.883 nvme0n1 00:52:46.883 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:46.883 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:46.883 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:46.883 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:46.883 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:46.883 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:46.883 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:46.883 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:46.883 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:46.883 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:46.883 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:46.883 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:46.883 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:52:46.883 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:46.883 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:52:46.883 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:52:46.883 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:52:46.883 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:46.883 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:46.883 05:47:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:52:46.883 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:52:46.884 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:46.884 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: ]] 00:52:46.884 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:46.884 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:52:46.884 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:46.884 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:52:46.884 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:52:46.884 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:52:46.884 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:46.884 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:52:46.884 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:46.884 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:46.884 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:46.884 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:46.884 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:46.884 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:46.884 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:46.884 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:46.884 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:46.884 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:46.884 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:46.884 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:46.884 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:46.884 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:46.884 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:46.884 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:46.884 05:47:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:47.142 nvme0n1 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: ]] 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:47.142 05:47:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:47.142 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:47.399 nvme0n1 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:52:47.399 
05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:47.399 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:47.400 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:47.400 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:47.400 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:47.400 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:47.400 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:47.400 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:47.400 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:47.400 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:47.400 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:47.400 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:47.400 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:52:47.400 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:47.400 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
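The xtrace output above repeats one pattern per DH group and key index: host/auth.sh@101-@104 loop over the configured dhgroups and key indices, push the key under test into the target with nvmet_auth_set_key, then run connect_authenticate against it. A minimal sketch of that loop, reconstructed from the trace rather than taken from the script verbatim ("$digest" is sha512 throughout this excerpt and is set by an enclosing loop not shown here; the keys/ckeys arrays are assumed to be populated earlier by the test setup):

    # Sketch of the iteration traced at host/auth.sh@101-@104 (a reconstruction, not the script itself).
    for dhgroup in "${dhgroups[@]}"; do                      # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...
        for keyid in "${!keys[@]}"; do                       # key indices 0..4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the DH-HMAC-CHAP key on the target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # reconnect from the SPDK host side and verify
        done
    done

Every group/key combination that follows in the log is one pass through this inner body.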
00:52:47.659 nvme0n1 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: ]] 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:52:47.659 05:47:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:47.659 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:47.918 nvme0n1 00:52:47.918 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:47.918 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:47.918 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:47.918 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:47.918 05:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:47.918 05:47:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: ]] 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:47.918 05:47:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:47.918 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:48.176 nvme0n1 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: ]] 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:48.176 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:48.434 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:48.435 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:48.435 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:48.435 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:48.435 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:48.435 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:48.435 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:48.435 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:48.435 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:48.435 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:48.435 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:48.435 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:48.435 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:48.435 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:48.435 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:48.693 nvme0n1 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: ]] 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:48.693 05:47:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:48.951 nvme0n1 00:52:48.951 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:48.951 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:48.951 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:48.952 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:49.210 nvme0n1 00:52:49.210 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:49.210 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:49.210 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:49.210 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:49.210 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:49.210 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:49.210 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:49.210 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:49.210 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:49.210 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:49.210 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:49.210 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:52:49.210 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:49.210 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:52:49.210 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:49.210 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:52:49.210 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:52:49.210 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:52:49.210 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:49.210 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: ]] 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:49.211 05:47:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:49.211 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:49.777 nvme0n1 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: ]] 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:49.777 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:49.778 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:49.778 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:49.778 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:49.778 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:49.778 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:49.778 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:49.778 05:47:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:49.778 05:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:50.343 nvme0n1 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: ]] 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:50.343 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:50.909 nvme0n1 00:52:50.909 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:50.909 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:50.909 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:50.909 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:50.909 05:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:50.909 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:50.909 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: ]] 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:50.910 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:51.473 nvme0n1 00:52:51.473 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:51.473 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:51.473 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:51.473 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:51.473 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:51.473 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:51.473 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:51.473 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:51.473 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:51.473 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:51.473 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:51.473 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:51.473 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:52:51.473 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:51.473 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:52:51.473 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:52:51.473 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:52:51.473 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:51.473 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:52:51.473 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:52:51.473 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:52:51.473 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:51.473 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:52:51.474 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:52:51.474 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:51.474 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:52:51.474 05:47:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:52:51.474 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:52:51.474 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:51.474 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:52:51.474 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:51.474 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:51.474 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:51.474 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:51.474 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:51.474 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:51.474 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:51.474 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:51.474 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:51.474 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:51.474 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:51.474 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:51.474 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:51.474 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:51.474 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:52:51.474 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:51.474 05:47:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:52.048 nvme0n1 00:52:52.048 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:52.048 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:52.048 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:52.048 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:52.048 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:52.048 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:52.048 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:52.048 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:52.048 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg2YjNlZWIyYzQyMzlmNTM5ZDI1NjY0MWVkZjNlYTjo7+hd: 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: ]] 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzNmZDcyODdmOTc3MmU3MjcxMmE4NjUwZjcxY2QxMjYwZWMyYmM1YTk2MDFhMjNmNGNjOTA5NzllNTc4YzgxYUESTis=: 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:52.049 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:52.982 nvme0n1 00:52:52.982 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:52.982 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:52.982 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:52.982 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:52.982 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:52.982 05:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: ]] 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:52.983 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:53.915 nvme0n1 00:52:53.915 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:53.915 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:53.915 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:53.915 05:47:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:53.915 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:53.915 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:53.915 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:53.915 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:53.915 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:53.915 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:53.915 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:53.915 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:53.915 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:52:53.915 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:53.915 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:52:53.915 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:52:53.915 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:52:53.915 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:53.915 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:53.915 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:52:53.916 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:52:53.916 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:53.916 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: ]] 00:52:53.916 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:53.916 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:52:53.916 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:53.916 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:52:53.916 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:52:53.916 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:52:53.916 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:53.916 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:52:53.916 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:53.916 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:53.916 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:53.916 05:47:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:53.916 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:53.916 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:53.916 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:53.916 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:53.916 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:53.916 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:53.916 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:53.916 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:53.916 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:53.916 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:53.916 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:53.916 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:53.916 05:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:54.848 nvme0n1 00:52:54.848 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:54.848 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTBkMTNhNGUzMTdlZGMwOTk1MmUzNjY0NWZmYWU1MGM5M2ExYzQ0NWQxMTRiYTMybDuXAA==: 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: ]] 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJhODJiYmQ5YTkxMTI2ODVmYjYzMTYyYzQ1ZWRhMjFIPJC1: 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:52:54.849 05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:54.849 
05:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:55.780 nvme0n1 00:52:55.780 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:55.780 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:55.780 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:55.780 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:55.780 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:55.780 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:55.780 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:55.780 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:55.780 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:55.780 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:55.780 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:55.780 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2NlYWQwZDk1NTNiZGRkNWU3OTVkNGQ2YTI4OGRjYzI2ZGM4NDU3OTM4MWNmMGYwNmM5ZWIxOWY2MTkxNjdiYqCvkoQ=: 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:55.781 05:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:56.713 nvme0n1 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: ]] 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:56.713 request: 00:52:56.713 { 00:52:56.713 "name": "nvme0", 00:52:56.713 "trtype": "tcp", 00:52:56.713 "traddr": "10.0.0.1", 00:52:56.713 "adrfam": "ipv4", 00:52:56.713 "trsvcid": "4420", 00:52:56.713 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:52:56.713 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:52:56.713 "prchk_reftag": false, 00:52:56.713 "prchk_guard": false, 00:52:56.713 "hdgst": false, 00:52:56.713 "ddgst": false, 00:52:56.713 "allow_unrecognized_csi": false, 00:52:56.713 "method": "bdev_nvme_attach_controller", 00:52:56.713 "req_id": 1 00:52:56.713 } 00:52:56.713 Got JSON-RPC error response 00:52:56.713 response: 00:52:56.713 { 00:52:56.713 "code": -5, 00:52:56.713 "message": "Input/output error" 00:52:56.713 } 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:56.713 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:56.714 request: 00:52:56.714 { 00:52:56.714 "name": "nvme0", 00:52:56.714 "trtype": "tcp", 00:52:56.714 "traddr": "10.0.0.1", 00:52:56.714 "adrfam": "ipv4", 00:52:56.714 "trsvcid": "4420", 00:52:56.714 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:52:56.714 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:52:56.714 "prchk_reftag": false, 00:52:56.714 "prchk_guard": false, 00:52:56.714 "hdgst": false, 00:52:56.714 "ddgst": false, 00:52:56.714 "dhchap_key": "key2", 00:52:56.714 "allow_unrecognized_csi": false, 00:52:56.714 "method": "bdev_nvme_attach_controller", 00:52:56.714 "req_id": 1 00:52:56.714 } 00:52:56.714 Got JSON-RPC error response 00:52:56.714 response: 00:52:56.714 { 00:52:56.714 "code": -5, 00:52:56.714 "message": "Input/output error" 00:52:56.714 } 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:56.714 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:56.972 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
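The trace above walks the negative paths of NVMe/TCP DH-HMAC-CHAP authentication: bdev_nvme_attach_controller is expected to fail with JSON-RPC code -5 (Input/output error) whenever the host offers no key, or the wrong --dhchap-key, for the secret configured on the target. A minimal sketch of driving the same wrong-key check directly through SPDK's rpc.py, reusing the addresses and NQNs from the trace (the key name key2 stands for whatever key the test loaded earlier and is illustrative here):

  # Expect the attach to be rejected when the wrong DH-HMAC-CHAP key is offered.
  if ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2; then
      echo "unexpected: controller attached with the wrong key" >&2
      exit 1
  fi
  echo "attach rejected as expected"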
00:52:56.972 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:52:56.972 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:56.972 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:56.972 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:56.972 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:56.972 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:56.972 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:56.972 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:56.972 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:56.972 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:56.972 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:56.972 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:52:56.972 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:52:56.972 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:52:56.972 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:52:56.972 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:56.972 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:52:56.972 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:56.972 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:52:56.973 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:56.973 05:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:56.973 request: 00:52:56.973 { 00:52:56.973 "name": "nvme0", 00:52:56.973 "trtype": "tcp", 00:52:56.973 "traddr": "10.0.0.1", 00:52:56.973 "adrfam": "ipv4", 00:52:56.973 "trsvcid": "4420", 00:52:56.973 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:52:56.973 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:52:56.973 "prchk_reftag": false, 00:52:56.973 "prchk_guard": false, 00:52:56.973 "hdgst": false, 00:52:56.973 "ddgst": false, 00:52:56.973 "dhchap_key": "key1", 00:52:56.973 "dhchap_ctrlr_key": "ckey2", 00:52:56.973 "allow_unrecognized_csi": false, 00:52:56.973 "method": "bdev_nvme_attach_controller", 00:52:56.973 "req_id": 1 00:52:56.973 } 00:52:56.973 Got JSON-RPC error response 00:52:56.973 response: 00:52:56.973 { 00:52:56.973 "code": -5, 00:52:56.973 "message": "Input/output 
error" 00:52:56.973 } 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:56.973 nvme0n1 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: ]] 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:56.973 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:57.231 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:57.231 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:52:57.231 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:52:57.231 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:57.231 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:57.231 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:57.231 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:52:57.231 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:52:57.231 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:52:57.231 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:52:57.231 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:52:57.232 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:57.232 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:52:57.232 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:57.232 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:52:57.232 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:57.232 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:57.232 request: 00:52:57.232 { 00:52:57.232 "name": "nvme0", 00:52:57.232 "dhchap_key": "key1", 00:52:57.232 "dhchap_ctrlr_key": "ckey2", 00:52:57.232 "method": "bdev_nvme_set_keys", 00:52:57.232 "req_id": 1 00:52:57.232 } 00:52:57.232 Got JSON-RPC error response 00:52:57.232 response: 00:52:57.232 { 00:52:57.232 "code": -13, 00:52:57.232 "message": "Permission denied" 00:52:57.232 } 00:52:57.232 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:52:57.232 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:52:57.232 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:52:57.232 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:52:57.232 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:52:57.232 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:52:57.232 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:52:57.232 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:57.232 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:57.232 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:57.232 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:52:57.232 05:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:52:58.604 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:52:58.604 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:52:58.604 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:58.604 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:58.604 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:58.604 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:52:58.604 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:52:58.604 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:58.604 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:58.604 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:52:58.604 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:52:58.604 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:58.604 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:58.604 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:58.604 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:52:58.604 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MmZmZmNmZjE4ZTBiYTU2MzNmOTcyZTIyMDNjMjcyOWI1YTVjYWY2NzQyZTY5dtjAfw==: 00:52:58.604 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: ]] 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjY5ZjAzYThiNmUxOTdmZjYwODM3M2NlNzc3ZDMxNzg2OWEwNDBkNTk2Zjc5M2Fm+u4Tyw==: 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:52:58.605 
05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:58.605 nvme0n1 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjI0ZGRiOTBjOTAzYzYyZmZiOTZmYjlmYzZmMDljYWT/9vbd: 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: ]] 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzEwZjI2ZTEwYTU4NjljNmNiMmUyOWM2ZGUxNDRjMjR3n/hy: 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:58.605 05:47:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:58.605 request: 00:52:58.605 { 00:52:58.605 "name": "nvme0", 00:52:58.605 "dhchap_key": "key2", 00:52:58.605 "dhchap_ctrlr_key": "ckey1", 00:52:58.605 "method": "bdev_nvme_set_keys", 00:52:58.605 "req_id": 1 00:52:58.605 } 00:52:58.605 Got JSON-RPC error response 00:52:58.605 response: 00:52:58.605 { 00:52:58.605 "code": -13, 00:52:58.605 "message": "Permission denied" 00:52:58.605 } 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:52:58.605 05:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:52:59.538 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:52:59.538 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:52:59.538 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:59.538 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:52:59.538 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:59.538 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:52:59.538 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:52:59.538 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:52:59.538 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:52:59.538 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:52:59.538 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:52:59.538 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:52:59.538 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:52:59.538 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:52:59.538 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:52:59.538 rmmod nvme_tcp 00:52:59.796 rmmod nvme_fabrics 00:52:59.796 
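The rmmod lines are modprobe's verbose output during nvmftestfini: nvme_tcp is unloaded first, and nvme_fabrics follows once nothing references it, which is why the cleanup that continues below only needs one more modprobe -r for nvme-fabrics before the nvmf_tgt process is killed. A hedged sketch of the same host-side unload, written so it is safe to re-run (the lsmod guard is illustrative and not part of the test scripts):

  # Unload the NVMe-oF host modules in dependency order; skip ones not loaded.
  for mod in nvme_tcp nvme_fabrics; do
      if lsmod | grep -q "^${mod} "; then
          modprobe -v -r "$mod" || echo "WARN: $mod still in use" >&2
      fi
  done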
05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:52:59.796 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:52:59.796 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:52:59.796 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 743393 ']' 00:52:59.796 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 743393 00:52:59.796 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 743393 ']' 00:52:59.796 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 743393 00:52:59.796 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:52:59.796 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:59.796 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 743393 00:52:59.796 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:52:59.796 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:52:59.796 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 743393' 00:52:59.796 killing process with pid 743393 00:52:59.796 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 743393 00:52:59.797 05:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 743393 00:53:00.055 05:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:53:00.055 05:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:53:00.055 05:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:53:00.055 05:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:53:00.055 05:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:53:00.055 05:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:53:00.055 05:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:53:00.055 05:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:53:00.055 05:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:53:00.055 05:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:00.055 05:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:00.055 05:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:01.955 05:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:53:01.955 05:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:53:01.955 05:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:53:01.955 05:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:53:01.955 05:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:53:01.955 05:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:53:01.955 05:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:53:01.955 05:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:53:01.955 05:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:53:01.955 05:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:53:01.955 05:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:53:01.955 05:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:53:01.955 05:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:53:03.328 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:53:03.328 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:53:03.328 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:53:03.328 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:53:03.328 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:53:03.328 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:53:03.328 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:53:03.328 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:53:03.328 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:53:03.328 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:53:03.328 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:53:03.328 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:53:03.328 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:53:03.328 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:53:03.328 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:53:03.328 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:53:04.266 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:53:04.266 05:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.rJS /tmp/spdk.key-null.nSp /tmp/spdk.key-sha256.yH6 /tmp/spdk.key-sha384.0zF /tmp/spdk.key-sha512.NeS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:53:04.524 05:47:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:53:05.459 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:53:05.459 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:53:05.459 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:53:05.459 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:53:05.459 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:53:05.459 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:53:05.459 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:53:05.459 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:53:05.459 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:53:05.459 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:53:05.459 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:53:05.459 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:53:05.459 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 
00:53:05.459 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:53:05.459 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:53:05.459 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:53:05.459 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:53:05.718 00:53:05.718 real 0m49.924s 00:53:05.718 user 0m47.660s 00:53:05.718 sys 0m6.017s 00:53:05.718 05:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:05.718 05:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:53:05.718 ************************************ 00:53:05.718 END TEST nvmf_auth_host 00:53:05.718 ************************************ 00:53:05.718 05:47:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:53:05.718 05:47:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:53:05.718 05:47:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:53:05.718 05:47:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:05.718 05:47:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:53:05.718 ************************************ 00:53:05.718 START TEST nvmf_digest 00:53:05.718 ************************************ 00:53:05.718 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:53:05.718 * Looking for test storage... 00:53:05.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:53:05.718 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:53:05.718 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:53:05.718 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:05.978 05:47:59 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:53:05.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:05.978 --rc genhtml_branch_coverage=1 00:53:05.978 --rc genhtml_function_coverage=1 00:53:05.978 --rc genhtml_legend=1 00:53:05.978 --rc geninfo_all_blocks=1 00:53:05.978 --rc geninfo_unexecuted_blocks=1 00:53:05.978 00:53:05.978 ' 00:53:05.978 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:53:05.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:05.978 --rc genhtml_branch_coverage=1 00:53:05.978 --rc genhtml_function_coverage=1 00:53:05.978 --rc genhtml_legend=1 00:53:05.978 --rc geninfo_all_blocks=1 00:53:05.978 --rc geninfo_unexecuted_blocks=1 00:53:05.978 00:53:05.978 ' 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:53:05.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:05.979 --rc genhtml_branch_coverage=1 00:53:05.979 --rc genhtml_function_coverage=1 00:53:05.979 --rc genhtml_legend=1 00:53:05.979 --rc geninfo_all_blocks=1 00:53:05.979 --rc geninfo_unexecuted_blocks=1 00:53:05.979 00:53:05.979 ' 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:53:05.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:05.979 --rc genhtml_branch_coverage=1 00:53:05.979 --rc genhtml_function_coverage=1 00:53:05.979 --rc genhtml_legend=1 00:53:05.979 --rc geninfo_all_blocks=1 00:53:05.979 --rc geninfo_unexecuted_blocks=1 00:53:05.979 00:53:05.979 ' 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:53:05.979 05:47:59 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:53:05.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ 
tcp != \t\c\p ]] 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:53:05.979 05:47:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:53:07.879 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:53:07.879 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:53:07.879 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:53:07.879 Found net devices under 0000:0a:00.0: cvl_0_0 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:53:07.880 Found net devices under 0000:0a:00.1: cvl_0_1 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:53:07.880 05:48:01 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:53:07.880 05:48:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:53:07.880 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:53:07.880 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:53:07.880 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:53:07.880 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:53:07.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:53:07.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:53:07.880 00:53:07.880 --- 10.0.0.2 ping statistics --- 00:53:07.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:07.880 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:53:07.880 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:53:07.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:53:07.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:53:07.880 00:53:07.880 --- 10.0.0.1 ping statistics --- 00:53:07.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:07.880 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:53:07.880 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:53:07.880 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:53:07.880 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:53:07.880 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:53:07.880 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:53:07.880 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:53:07.880 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:53:07.880 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:53:07.880 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:53:08.140 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:53:08.140 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:53:08.140 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:53:08.140 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:53:08.140 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:08.140 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:53:08.140 ************************************ 00:53:08.140 START TEST nvmf_digest_clean 00:53:08.140 ************************************ 00:53:08.140 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:53:08.140 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:53:08.140 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:53:08.140 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:53:08.140 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:53:08.140 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:53:08.140 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:53:08.140 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:53:08.140 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:53:08.140 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=752976 00:53:08.140 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:53:08.140 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 
752976 00:53:08.140 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 752976 ']' 00:53:08.140 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:08.140 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:08.140 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:08.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:08.140 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:08.140 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:53:08.140 [2024-12-09 05:48:02.189412] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:53:08.140 [2024-12-09 05:48:02.189496] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:53:08.140 [2024-12-09 05:48:02.264357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:08.140 [2024-12-09 05:48:02.321324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:53:08.140 [2024-12-09 05:48:02.321379] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:53:08.140 [2024-12-09 05:48:02.321392] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:53:08.140 [2024-12-09 05:48:02.321403] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:53:08.140 [2024-12-09 05:48:02.321413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
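For readability, here is a condensed sketch of the TCP test-environment setup that the trace above performs (nvmf_tcp_init plus the namespaced nvmf_tgt launch). Interface names, addresses, the iptables rule and the workspace path are the values used in this run; backgrounding the target with `&` and skipping the harness's waitforlisten/cleanup logic are simplifications, not part of the original script.

```bash
#!/usr/bin/env bash
# Condensed sketch of the setup traced above (nvmf/common.sh, this run's values).
set -e

TARGET_IF=cvl_0_0        # E810 port moved into the target namespace
INITIATOR_IF=cvl_0_1     # peer port, stays in the default namespace
NS=cvl_0_0_ns_spdk

# Flush stale addresses, create the namespace, move the target port into it
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

# Address both ends: initiator 10.0.0.1, target 10.0.0.2 inside the namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port (the harness also tags the rule with an SPDK_NVMF
# comment for later cleanup) and verify reachability in both directions
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

modprobe nvme-tcp

# Start the target inside the namespace, paused until RPC configuration
ip netns exec "$NS" \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF --wait-for-rpc &
```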
00:53:08.140 [2024-12-09 05:48:02.321945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:08.396 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:08.396 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:53:08.396 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:53:08.396 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:53:08.396 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:53:08.396 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:53:08.396 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:53:08.396 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:53:08.396 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:53:08.396 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:08.396 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:53:08.396 null0 00:53:08.396 [2024-12-09 05:48:02.555226] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:08.396 [2024-12-09 05:48:02.579502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:53:08.396 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:08.396 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:53:08.396 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:53:08.396 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:53:08.396 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:53:08.396 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:53:08.396 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:53:08.396 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:53:08.396 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=752999 00:53:08.397 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:53:08.397 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 752999 /var/tmp/bperf.sock 00:53:08.397 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 752999 ']' 00:53:08.397 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:53:08.397 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:53:08.397 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:53:08.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:53:08.397 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:08.397 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:53:08.653 [2024-12-09 05:48:02.627505] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:53:08.653 [2024-12-09 05:48:02.627588] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid752999 ] 00:53:08.653 [2024-12-09 05:48:02.693711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:08.653 [2024-12-09 05:48:02.750334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:08.653 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:08.653 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:53:08.653 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:53:08.653 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:53:08.653 05:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:53:09.217 05:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:53:09.217 05:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:53:09.474 nvme0n1 00:53:09.474 05:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:53:09.474 05:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:53:09.732 Running I/O for 2 seconds... 
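Each run_bperf iteration traced above follows the same shape: start bdevperf paused on its own RPC socket, finish framework init, attach the namespaced target with data digest enabled, then drive the workload through bdevperf.py. A condensed sketch with the paths, socket and parameters of the first run (randread, 4 KiB, QD 128); the harness's waitforlisten step is replaced here by a comment.

```bash
#!/usr/bin/env bash
# Sketch of one run_bperf iteration as traced above (host/digest.sh, first run).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

# Start bdevperf paused (-z, --wait-for-rpc) on core 1 with its own RPC socket
"$SPDK"/build/examples/bdevperf -m 2 -r "$SOCK" \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
# (the harness waits for $SOCK to appear before issuing RPCs)

# Finish framework init, then attach the target with data digest (--ddgst) enabled
"$SPDK"/scripts/rpc.py -s "$SOCK" framework_start_init
"$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Run the 2-second workload against the attached bdev (nvme0n1)
"$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests
```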
00:53:11.595 18204.00 IOPS, 71.11 MiB/s [2024-12-09T04:48:05.820Z] 18266.50 IOPS, 71.35 MiB/s 00:53:11.595 Latency(us) 00:53:11.595 [2024-12-09T04:48:05.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:11.595 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:53:11.595 nvme0n1 : 2.01 18275.48 71.39 0.00 0.00 6993.71 3373.89 16699.54 00:53:11.595 [2024-12-09T04:48:05.820Z] =================================================================================================================== 00:53:11.595 [2024-12-09T04:48:05.820Z] Total : 18275.48 71.39 0.00 0.00 6993.71 3373.89 16699.54 00:53:11.595 { 00:53:11.596 "results": [ 00:53:11.596 { 00:53:11.596 "job": "nvme0n1", 00:53:11.596 "core_mask": "0x2", 00:53:11.596 "workload": "randread", 00:53:11.596 "status": "finished", 00:53:11.596 "queue_depth": 128, 00:53:11.596 "io_size": 4096, 00:53:11.596 "runtime": 2.008046, 00:53:11.596 "iops": 18275.477752999683, 00:53:11.596 "mibps": 71.38858497265501, 00:53:11.596 "io_failed": 0, 00:53:11.596 "io_timeout": 0, 00:53:11.596 "avg_latency_us": 6993.714196716746, 00:53:11.596 "min_latency_us": 3373.8903703703704, 00:53:11.596 "max_latency_us": 16699.543703703705 00:53:11.596 } 00:53:11.596 ], 00:53:11.596 "core_count": 1 00:53:11.596 } 00:53:11.596 05:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:53:11.596 05:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:53:11.596 05:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:53:11.596 05:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:53:11.596 | select(.opcode=="crc32c") 00:53:11.596 | "\(.module_name) \(.executed)"' 00:53:11.596 05:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:53:11.853 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:53:11.853 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:53:11.853 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:53:11.853 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:53:11.853 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 752999 00:53:11.853 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 752999 ']' 00:53:11.853 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 752999 00:53:11.853 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:53:11.853 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:11.853 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 752999 00:53:11.853 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:53:11.853 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:53:11.853 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 752999' 00:53:11.853 killing process with pid 752999 00:53:11.853 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 752999 00:53:11.853 Received shutdown signal, test time was about 2.000000 seconds 00:53:11.853 00:53:11.853 Latency(us) 00:53:11.853 [2024-12-09T04:48:06.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:11.853 [2024-12-09T04:48:06.078Z] =================================================================================================================== 00:53:11.853 [2024-12-09T04:48:06.078Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:53:11.853 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 752999 00:53:12.112 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:53:12.112 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:53:12.112 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:53:12.112 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:53:12.112 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:53:12.112 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:53:12.112 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:53:12.112 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=753425 00:53:12.112 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:53:12.112 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 753425 /var/tmp/bperf.sock 00:53:12.112 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 753425 ']' 00:53:12.112 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:53:12.112 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:12.112 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:53:12.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:53:12.112 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:12.113 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:53:12.369 [2024-12-09 05:48:06.372586] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:53:12.369 [2024-12-09 05:48:06.372672] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid753425 ] 00:53:12.369 I/O size of 131072 is greater than zero copy threshold (65536). 00:53:12.369 Zero copy mechanism will not be used. 00:53:12.370 [2024-12-09 05:48:06.439718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:12.370 [2024-12-09 05:48:06.498807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:12.627 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:12.627 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:53:12.627 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:53:12.627 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:53:12.627 05:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:53:12.885 05:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:53:12.885 05:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:53:13.450 nvme0n1 00:53:13.450 05:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:53:13.450 05:48:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:53:13.708 I/O size of 131072 is greater than zero copy threshold (65536). 00:53:13.708 Zero copy mechanism will not be used. 00:53:13.708 Running I/O for 2 seconds... 
00:53:15.573 5755.00 IOPS, 719.38 MiB/s [2024-12-09T04:48:09.798Z] 5741.50 IOPS, 717.69 MiB/s 00:53:15.573 Latency(us) 00:53:15.573 [2024-12-09T04:48:09.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:15.574 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:53:15.574 nvme0n1 : 2.00 5739.66 717.46 0.00 0.00 2783.66 752.45 8641.04 00:53:15.574 [2024-12-09T04:48:09.799Z] =================================================================================================================== 00:53:15.574 [2024-12-09T04:48:09.799Z] Total : 5739.66 717.46 0.00 0.00 2783.66 752.45 8641.04 00:53:15.574 { 00:53:15.574 "results": [ 00:53:15.574 { 00:53:15.574 "job": "nvme0n1", 00:53:15.574 "core_mask": "0x2", 00:53:15.574 "workload": "randread", 00:53:15.574 "status": "finished", 00:53:15.574 "queue_depth": 16, 00:53:15.574 "io_size": 131072, 00:53:15.574 "runtime": 2.003428, 00:53:15.574 "iops": 5739.662218956708, 00:53:15.574 "mibps": 717.4577773695885, 00:53:15.574 "io_failed": 0, 00:53:15.574 "io_timeout": 0, 00:53:15.574 "avg_latency_us": 2783.6626978835516, 00:53:15.574 "min_latency_us": 752.4503703703704, 00:53:15.574 "max_latency_us": 8641.042962962963 00:53:15.574 } 00:53:15.574 ], 00:53:15.574 "core_count": 1 00:53:15.574 } 00:53:15.574 05:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:53:15.574 05:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:53:15.574 05:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:53:15.574 05:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:53:15.574 05:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:53:15.574 | select(.opcode=="crc32c") 00:53:15.574 | "\(.module_name) \(.executed)"' 00:53:15.831 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:53:15.831 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:53:15.831 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:53:15.831 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:53:15.831 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 753425 00:53:15.831 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 753425 ']' 00:53:15.831 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 753425 00:53:15.831 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:53:15.831 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:15.831 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 753425 00:53:15.831 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:53:15.831 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:53:15.831 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 753425' 00:53:15.831 killing process with pid 753425 00:53:15.831 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 753425 00:53:15.831 Received shutdown signal, test time was about 2.000000 seconds 00:53:15.831 00:53:15.831 Latency(us) 00:53:15.831 [2024-12-09T04:48:10.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:15.831 [2024-12-09T04:48:10.056Z] =================================================================================================================== 00:53:15.831 [2024-12-09T04:48:10.056Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:53:15.831 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 753425 00:53:16.088 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:53:16.088 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:53:16.088 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:53:16.088 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:53:16.088 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:53:16.088 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:53:16.088 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:53:16.088 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=754443 00:53:16.088 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:53:16.088 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 754443 /var/tmp/bperf.sock 00:53:16.088 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 754443 ']' 00:53:16.088 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:53:16.088 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:16.088 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:53:16.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:53:16.088 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:16.088 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:53:16.346 [2024-12-09 05:48:10.343873] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:53:16.346 [2024-12-09 05:48:10.343962] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid754443 ] 00:53:16.346 [2024-12-09 05:48:10.412198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:16.346 [2024-12-09 05:48:10.473189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:16.620 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:16.620 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:53:16.620 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:53:16.620 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:53:16.620 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:53:16.914 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:53:16.914 05:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:53:17.197 nvme0n1 00:53:17.197 05:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:53:17.197 05:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:53:17.197 Running I/O for 2 seconds... 
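After every bperf run above, the test reads the accel layer's crc32c statistics over the same socket and checks that the expected module did the work (software here, since DSA scanning is disabled). A standalone sketch of that check, using the socket path and jq filter exactly as they appear in the trace; the exit-on-failure handling is a simplification of the harness's assertions.

```bash
#!/usr/bin/env bash
# Sketch of the per-run crc32c accounting check traced above (get_accel_stats).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

# Pull the crc32c entry out of accel_get_stats: "<module_name> <executed>"
read -r acc_module acc_executed < <(
    "$SPDK"/scripts/rpc.py -s "$SOCK" accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)

# scan_dsa=false in these runs, so the digests must have been computed in software
exp_module=software
(( acc_executed > 0 )) || { echo "no crc32c operations executed" >&2; exit 1; }
[[ $acc_module == "$exp_module" ]] || { echo "unexpected accel module: $acc_module" >&2; exit 1; }
```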
00:53:19.495 18234.00 IOPS, 71.23 MiB/s [2024-12-09T04:48:13.720Z] 18249.00 IOPS, 71.29 MiB/s 00:53:19.495 Latency(us) 00:53:19.495 [2024-12-09T04:48:13.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:19.495 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:53:19.495 nvme0n1 : 2.01 18252.06 71.30 0.00 0.00 6996.67 2718.53 9417.77 00:53:19.495 [2024-12-09T04:48:13.720Z] =================================================================================================================== 00:53:19.495 [2024-12-09T04:48:13.720Z] Total : 18252.06 71.30 0.00 0.00 6996.67 2718.53 9417.77 00:53:19.495 { 00:53:19.495 "results": [ 00:53:19.495 { 00:53:19.495 "job": "nvme0n1", 00:53:19.495 "core_mask": "0x2", 00:53:19.495 "workload": "randwrite", 00:53:19.495 "status": "finished", 00:53:19.495 "queue_depth": 128, 00:53:19.495 "io_size": 4096, 00:53:19.495 "runtime": 2.008431, 00:53:19.495 "iops": 18252.058447614083, 00:53:19.495 "mibps": 71.29710331099251, 00:53:19.495 "io_failed": 0, 00:53:19.495 "io_timeout": 0, 00:53:19.495 "avg_latency_us": 6996.672479919496, 00:53:19.495 "min_latency_us": 2718.5303703703703, 00:53:19.495 "max_latency_us": 9417.765925925925 00:53:19.495 } 00:53:19.495 ], 00:53:19.495 "core_count": 1 00:53:19.495 } 00:53:19.495 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:53:19.495 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:53:19.495 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:53:19.495 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:53:19.495 | select(.opcode=="crc32c") 00:53:19.495 | "\(.module_name) \(.executed)"' 00:53:19.495 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:53:19.495 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:53:19.495 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:53:19.496 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:53:19.496 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:53:19.496 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 754443 00:53:19.496 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 754443 ']' 00:53:19.496 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 754443 00:53:19.496 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:53:19.496 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:19.496 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 754443 00:53:19.753 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:53:19.753 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:53:19.753 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 754443' 00:53:19.753 killing process with pid 754443 00:53:19.753 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 754443 00:53:19.753 Received shutdown signal, test time was about 2.000000 seconds 00:53:19.753 00:53:19.753 Latency(us) 00:53:19.753 [2024-12-09T04:48:13.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:19.753 [2024-12-09T04:48:13.978Z] =================================================================================================================== 00:53:19.753 [2024-12-09T04:48:13.978Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:53:19.753 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 754443 00:53:19.753 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:53:19.753 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:53:19.753 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:53:19.753 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:53:19.753 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:53:19.753 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:53:19.753 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:53:19.753 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=754852 00:53:19.753 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:53:19.753 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 754852 /var/tmp/bperf.sock 00:53:19.753 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 754852 ']' 00:53:19.753 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:53:19.753 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:19.753 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:53:19.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:53:19.753 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:19.753 05:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:53:20.011 [2024-12-09 05:48:14.002088] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:53:20.011 [2024-12-09 05:48:14.002173] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid754852 ] 00:53:20.011 I/O size of 131072 is greater than zero copy threshold (65536). 00:53:20.011 Zero copy mechanism will not be used. 00:53:20.011 [2024-12-09 05:48:14.068105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:20.011 [2024-12-09 05:48:14.127141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:20.268 05:48:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:20.268 05:48:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:53:20.268 05:48:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:53:20.268 05:48:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:53:20.268 05:48:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:53:20.527 05:48:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:53:20.527 05:48:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:53:21.092 nvme0n1 00:53:21.092 05:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:53:21.092 05:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:53:21.092 I/O size of 131072 is greater than zero copy threshold (65536). 00:53:21.092 Zero copy mechanism will not be used. 00:53:21.092 Running I/O for 2 seconds... 
00:53:23.394 6583.00 IOPS, 822.88 MiB/s [2024-12-09T04:48:17.619Z] 6663.50 IOPS, 832.94 MiB/s 00:53:23.394 Latency(us) 00:53:23.394 [2024-12-09T04:48:17.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:23.394 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:53:23.394 nvme0n1 : 2.00 6659.36 832.42 0.00 0.00 2391.97 1747.63 4660.34 00:53:23.394 [2024-12-09T04:48:17.619Z] =================================================================================================================== 00:53:23.394 [2024-12-09T04:48:17.619Z] Total : 6659.36 832.42 0.00 0.00 2391.97 1747.63 4660.34 00:53:23.394 { 00:53:23.395 "results": [ 00:53:23.395 { 00:53:23.395 "job": "nvme0n1", 00:53:23.395 "core_mask": "0x2", 00:53:23.395 "workload": "randwrite", 00:53:23.395 "status": "finished", 00:53:23.395 "queue_depth": 16, 00:53:23.395 "io_size": 131072, 00:53:23.395 "runtime": 2.003645, 00:53:23.395 "iops": 6659.363310366856, 00:53:23.395 "mibps": 832.420413795857, 00:53:23.395 "io_failed": 0, 00:53:23.395 "io_timeout": 0, 00:53:23.395 "avg_latency_us": 2391.973225633638, 00:53:23.395 "min_latency_us": 1747.6266666666668, 00:53:23.395 "max_latency_us": 4660.337777777778 00:53:23.395 } 00:53:23.395 ], 00:53:23.395 "core_count": 1 00:53:23.395 } 00:53:23.395 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:53:23.395 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:53:23.395 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:53:23.395 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:53:23.395 | select(.opcode=="crc32c") 00:53:23.395 | "\(.module_name) \(.executed)"' 00:53:23.395 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:53:23.395 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:53:23.395 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:53:23.395 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:53:23.395 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:53:23.395 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 754852 00:53:23.395 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 754852 ']' 00:53:23.395 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 754852 00:53:23.395 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:53:23.395 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:23.395 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 754852 00:53:23.395 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:53:23.395 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:53:23.395 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 754852' 00:53:23.395 killing process with pid 754852 00:53:23.395 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 754852 00:53:23.395 Received shutdown signal, test time was about 2.000000 seconds 00:53:23.395 00:53:23.395 Latency(us) 00:53:23.395 [2024-12-09T04:48:17.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:23.395 [2024-12-09T04:48:17.620Z] =================================================================================================================== 00:53:23.395 [2024-12-09T04:48:17.620Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:53:23.395 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 754852 00:53:23.652 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 752976 00:53:23.652 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 752976 ']' 00:53:23.652 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 752976 00:53:23.652 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:53:23.652 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:23.652 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 752976 00:53:23.652 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:53:23.652 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:53:23.652 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 752976' 00:53:23.652 killing process with pid 752976 00:53:23.652 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 752976 00:53:23.652 05:48:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 752976 00:53:23.911 00:53:23.911 real 0m15.966s 00:53:23.911 user 0m31.044s 00:53:23.911 sys 0m4.560s 00:53:23.911 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:23.911 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:53:23.911 ************************************ 00:53:23.911 END TEST nvmf_digest_clean 00:53:23.911 ************************************ 00:53:23.911 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:53:23.911 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:53:23.911 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:23.911 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:53:24.168 ************************************ 00:53:24.168 START TEST nvmf_digest_error 00:53:24.168 ************************************ 00:53:24.168 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:53:24.168 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:53:24.168 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:53:24.168 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:53:24.168 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:53:24.168 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=755408 00:53:24.168 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:53:24.168 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 755408 00:53:24.168 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 755408 ']' 00:53:24.168 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:24.168 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:24.168 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:24.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:24.168 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:24.168 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:53:24.168 [2024-12-09 05:48:18.214486] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:53:24.168 [2024-12-09 05:48:18.214582] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:53:24.168 [2024-12-09 05:48:18.284026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:24.168 [2024-12-09 05:48:18.338433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:53:24.168 [2024-12-09 05:48:18.338481] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:53:24.168 [2024-12-09 05:48:18.338495] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:53:24.168 [2024-12-09 05:48:18.338507] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:53:24.168 [2024-12-09 05:48:18.338517] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
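The nvmf_digest_error test starting here differs from nvmf_digest_clean in that it routes crc32c through the accel "error" module and injects corruption after the controller is attached, so digest failures surface as transport errors. A condensed sketch of the RPC sequence visible in the trace that follows, with values from this run; the assumption that the plain rpc_cmd calls go to the nvmf_tgt's default /var/tmp/spdk.sock while bperf_rpc uses /var/tmp/bperf.sock is mine, not stated in the log.

```bash
#!/usr/bin/env bash
# Sketch of the error-injection sequence used by nvmf_digest_error (traced below).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# Target side (default RPC socket, assumed): assign crc32c to the error module
"$SPDK"/scripts/rpc.py accel_assign_opc -o crc32c -m error
# ... target transport/listener configured, bdevperf started as in the clean test ...

# Host side: count NVMe errors and disable bdev retries so failures are visible
"$SPDK"/scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# Injection starts disabled, then the controller is attached with data digest on
"$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
"$SPDK"/scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt 256 crc32c operations, then run the workload and expect digest errors
"$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
"$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests
```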
00:53:24.168 [2024-12-09 05:48:18.338973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:24.425 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:24.425 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:53:24.425 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:53:24.425 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:53:24.425 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:53:24.425 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:53:24.426 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:53:24.426 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:24.426 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:53:24.426 [2024-12-09 05:48:18.463712] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:53:24.426 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:24.426 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:53:24.426 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:53:24.426 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:24.426 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:53:24.426 null0 00:53:24.426 [2024-12-09 05:48:18.582118] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:24.426 [2024-12-09 05:48:18.606404] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:53:24.426 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:24.426 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:53:24.426 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:53:24.426 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:53:24.426 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:53:24.426 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:53:24.426 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=755435 00:53:24.426 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 755435 /var/tmp/bperf.sock 00:53:24.426 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 755435 ']' 00:53:24.426 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:53:24.426 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:53:24.426 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:53:24.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:53:24.426 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:24.426 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:53:24.426 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:53:24.683 [2024-12-09 05:48:18.663573] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:53:24.683 [2024-12-09 05:48:18.663661] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755435 ] 00:53:24.683 [2024-12-09 05:48:18.737928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:24.683 [2024-12-09 05:48:18.797472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:24.940 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:24.940 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:53:24.940 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:53:24.940 05:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:53:25.216 05:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:53:25.216 05:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:25.216 05:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:53:25.216 05:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:25.216 05:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:53:25.216 05:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:53:25.473 nvme0n1 00:53:25.473 05:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:53:25.473 05:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:25.473 05:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set 
+x 00:53:25.473 05:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:25.473 05:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:53:25.473 05:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:53:25.730 Running I/O for 2 seconds... 00:53:25.730 [2024-12-09 05:48:19.752570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.730 [2024-12-09 05:48:19.752654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.730 [2024-12-09 05:48:19.752674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.730 [2024-12-09 05:48:19.768265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.730 [2024-12-09 05:48:19.768323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.730 [2024-12-09 05:48:19.768342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.730 [2024-12-09 05:48:19.784133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.730 [2024-12-09 05:48:19.784167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.730 [2024-12-09 05:48:19.784185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.730 [2024-12-09 05:48:19.800329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.730 [2024-12-09 05:48:19.800360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.730 [2024-12-09 05:48:19.800378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.730 [2024-12-09 05:48:19.811038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.730 [2024-12-09 05:48:19.811070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.730 [2024-12-09 05:48:19.811087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.730 [2024-12-09 05:48:19.826546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.730 [2024-12-09 05:48:19.826591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.730 [2024-12-09 05:48:19.826608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.730 [2024-12-09 05:48:19.842023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.730 [2024-12-09 05:48:19.842054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.730 [2024-12-09 05:48:19.842071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.730 [2024-12-09 05:48:19.854935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.730 [2024-12-09 05:48:19.854965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.730 [2024-12-09 05:48:19.854982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.730 [2024-12-09 05:48:19.867393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.730 [2024-12-09 05:48:19.867423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.730 [2024-12-09 05:48:19.867440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.730 [2024-12-09 05:48:19.880169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.730 [2024-12-09 05:48:19.880200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.730 [2024-12-09 05:48:19.880217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.730 [2024-12-09 05:48:19.893843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.730 [2024-12-09 05:48:19.893876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.730 [2024-12-09 05:48:19.893894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.730 [2024-12-09 05:48:19.906478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.730 [2024-12-09 05:48:19.906511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.730 [2024-12-09 05:48:19.906529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.730 [2024-12-09 05:48:19.917553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.730 [2024-12-09 05:48:19.917584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.730 [2024-12-09 05:48:19.917600] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.730 [2024-12-09 05:48:19.930360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.730 [2024-12-09 05:48:19.930391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.730 [2024-12-09 05:48:19.930408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.730 [2024-12-09 05:48:19.943688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.730 [2024-12-09 05:48:19.943720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.730 [2024-12-09 05:48:19.943737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.987 [2024-12-09 05:48:19.960111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.987 [2024-12-09 05:48:19.960141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.987 [2024-12-09 05:48:19.960157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.987 [2024-12-09 05:48:19.976985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.987 [2024-12-09 05:48:19.977031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.987 [2024-12-09 05:48:19.977049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.987 [2024-12-09 05:48:19.987444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.987 [2024-12-09 05:48:19.987476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.987 [2024-12-09 05:48:19.987500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.987 [2024-12-09 05:48:20.003042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.987 [2024-12-09 05:48:20.003076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.987 [2024-12-09 05:48:20.003095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.987 [2024-12-09 05:48:20.018211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.987 [2024-12-09 05:48:20.018267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.987 [2024-12-09 05:48:20.018300] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.987 [2024-12-09 05:48:20.029827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.987 [2024-12-09 05:48:20.029859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.987 [2024-12-09 05:48:20.029875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.987 [2024-12-09 05:48:20.044539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.987 [2024-12-09 05:48:20.044577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.987 [2024-12-09 05:48:20.044596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.987 [2024-12-09 05:48:20.062727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.987 [2024-12-09 05:48:20.062768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.987 [2024-12-09 05:48:20.062786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.987 [2024-12-09 05:48:20.078141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.987 [2024-12-09 05:48:20.078174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.987 [2024-12-09 05:48:20.078193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.987 [2024-12-09 05:48:20.089857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.987 [2024-12-09 05:48:20.089888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.987 [2024-12-09 05:48:20.089905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.987 [2024-12-09 05:48:20.104297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.987 [2024-12-09 05:48:20.104330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.987 [2024-12-09 05:48:20.104347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.987 [2024-12-09 05:48:20.119618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.987 [2024-12-09 05:48:20.119664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:53:25.987 [2024-12-09 05:48:20.119682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.987 [2024-12-09 05:48:20.136211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.987 [2024-12-09 05:48:20.136258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.987 [2024-12-09 05:48:20.136299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.987 [2024-12-09 05:48:20.152199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.987 [2024-12-09 05:48:20.152231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.987 [2024-12-09 05:48:20.152263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.987 [2024-12-09 05:48:20.166102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.987 [2024-12-09 05:48:20.166149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.987 [2024-12-09 05:48:20.166166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.987 [2024-12-09 05:48:20.178426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.987 [2024-12-09 05:48:20.178459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.987 [2024-12-09 05:48:20.178478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.987 [2024-12-09 05:48:20.191102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.987 [2024-12-09 05:48:20.191133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.987 [2024-12-09 05:48:20.191151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:25.987 [2024-12-09 05:48:20.205050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:25.987 [2024-12-09 05:48:20.205080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:25.987 [2024-12-09 05:48:20.205095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.244 [2024-12-09 05:48:20.218461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.244 [2024-12-09 05:48:20.218492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 
lba:4286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.244 [2024-12-09 05:48:20.218509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.244 [2024-12-09 05:48:20.232788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.244 [2024-12-09 05:48:20.232820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.244 [2024-12-09 05:48:20.232839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.244 [2024-12-09 05:48:20.247507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.244 [2024-12-09 05:48:20.247540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.244 [2024-12-09 05:48:20.247558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.244 [2024-12-09 05:48:20.263614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.244 [2024-12-09 05:48:20.263645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.244 [2024-12-09 05:48:20.263667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.244 [2024-12-09 05:48:20.275778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.244 [2024-12-09 05:48:20.275809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.244 [2024-12-09 05:48:20.275827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.244 [2024-12-09 05:48:20.292095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.244 [2024-12-09 05:48:20.292125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.244 [2024-12-09 05:48:20.292142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.244 [2024-12-09 05:48:20.302748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.244 [2024-12-09 05:48:20.302777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.244 [2024-12-09 05:48:20.302793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.244 [2024-12-09 05:48:20.317969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.244 [2024-12-09 05:48:20.318001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.244 [2024-12-09 05:48:20.318018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.244 [2024-12-09 05:48:20.333514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.244 [2024-12-09 05:48:20.333561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.244 [2024-12-09 05:48:20.333578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.244 [2024-12-09 05:48:20.349963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.244 [2024-12-09 05:48:20.349996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.244 [2024-12-09 05:48:20.350013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.244 [2024-12-09 05:48:20.361509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.244 [2024-12-09 05:48:20.361564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.244 [2024-12-09 05:48:20.361582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.244 [2024-12-09 05:48:20.375575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.245 [2024-12-09 05:48:20.375605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.245 [2024-12-09 05:48:20.375621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.245 [2024-12-09 05:48:20.388533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.245 [2024-12-09 05:48:20.388578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.245 [2024-12-09 05:48:20.388595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.245 [2024-12-09 05:48:20.400927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.245 [2024-12-09 05:48:20.400956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.245 [2024-12-09 05:48:20.400972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.245 [2024-12-09 05:48:20.417021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 
00:53:26.245 [2024-12-09 05:48:20.417050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.245 [2024-12-09 05:48:20.417067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.245 [2024-12-09 05:48:20.430232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.245 [2024-12-09 05:48:20.430261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.245 [2024-12-09 05:48:20.430284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.245 [2024-12-09 05:48:20.444007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.245 [2024-12-09 05:48:20.444036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.245 [2024-12-09 05:48:20.444052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.245 [2024-12-09 05:48:20.456042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.245 [2024-12-09 05:48:20.456088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.245 [2024-12-09 05:48:20.456105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.502 [2024-12-09 05:48:20.470534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.502 [2024-12-09 05:48:20.470581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.502 [2024-12-09 05:48:20.470598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.502 [2024-12-09 05:48:20.486205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.502 [2024-12-09 05:48:20.486237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.502 [2024-12-09 05:48:20.486254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.502 [2024-12-09 05:48:20.498136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.502 [2024-12-09 05:48:20.498164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.502 [2024-12-09 05:48:20.498180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.502 [2024-12-09 05:48:20.515617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.502 [2024-12-09 05:48:20.515646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.502 [2024-12-09 05:48:20.515662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.502 [2024-12-09 05:48:20.530398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.502 [2024-12-09 05:48:20.530430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.502 [2024-12-09 05:48:20.530448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.502 [2024-12-09 05:48:20.545820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.502 [2024-12-09 05:48:20.545850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.502 [2024-12-09 05:48:20.545867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.502 [2024-12-09 05:48:20.561830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.502 [2024-12-09 05:48:20.561861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.502 [2024-12-09 05:48:20.561876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.502 [2024-12-09 05:48:20.575329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.502 [2024-12-09 05:48:20.575363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.502 [2024-12-09 05:48:20.575381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.502 [2024-12-09 05:48:20.588324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.502 [2024-12-09 05:48:20.588357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.502 [2024-12-09 05:48:20.588374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.502 [2024-12-09 05:48:20.600122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.502 [2024-12-09 05:48:20.600151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.502 [2024-12-09 05:48:20.600173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.502 [2024-12-09 05:48:20.615541] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.502 [2024-12-09 05:48:20.615588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.502 [2024-12-09 05:48:20.615607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.502 [2024-12-09 05:48:20.629009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.502 [2024-12-09 05:48:20.629041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.503 [2024-12-09 05:48:20.629059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.503 [2024-12-09 05:48:20.641841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.503 [2024-12-09 05:48:20.641870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.503 [2024-12-09 05:48:20.641885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.503 [2024-12-09 05:48:20.656365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.503 [2024-12-09 05:48:20.656397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.503 [2024-12-09 05:48:20.656415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.503 [2024-12-09 05:48:20.670062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.503 [2024-12-09 05:48:20.670091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.503 [2024-12-09 05:48:20.670122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.503 [2024-12-09 05:48:20.682834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.503 [2024-12-09 05:48:20.682863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.503 [2024-12-09 05:48:20.682879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.503 [2024-12-09 05:48:20.698832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.503 [2024-12-09 05:48:20.698861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.503 [2024-12-09 05:48:20.698877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:53:26.503 [2024-12-09 05:48:20.714884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.503 [2024-12-09 05:48:20.714918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.503 [2024-12-09 05:48:20.714950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.760 [2024-12-09 05:48:20.730024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.760 [2024-12-09 05:48:20.730063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.760 [2024-12-09 05:48:20.730096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.760 17926.00 IOPS, 70.02 MiB/s [2024-12-09T04:48:20.985Z] [2024-12-09 05:48:20.743095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.760 [2024-12-09 05:48:20.743142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.760 [2024-12-09 05:48:20.743160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.760 [2024-12-09 05:48:20.755517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.760 [2024-12-09 05:48:20.755564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.760 [2024-12-09 05:48:20.755581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.760 [2024-12-09 05:48:20.770488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.760 [2024-12-09 05:48:20.770520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.760 [2024-12-09 05:48:20.770554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.760 [2024-12-09 05:48:20.783717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.760 [2024-12-09 05:48:20.783747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.760 [2024-12-09 05:48:20.783763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.760 [2024-12-09 05:48:20.796341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.760 [2024-12-09 05:48:20.796393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.760 [2024-12-09 05:48:20.796426] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.760 [2024-12-09 05:48:20.810398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.760 [2024-12-09 05:48:20.810427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.760 [2024-12-09 05:48:20.810444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.760 [2024-12-09 05:48:20.824997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.760 [2024-12-09 05:48:20.825041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.760 [2024-12-09 05:48:20.825058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.760 [2024-12-09 05:48:20.835644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.760 [2024-12-09 05:48:20.835672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.760 [2024-12-09 05:48:20.835688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.760 [2024-12-09 05:48:20.850070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.760 [2024-12-09 05:48:20.850099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.760 [2024-12-09 05:48:20.850115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.760 [2024-12-09 05:48:20.864322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.760 [2024-12-09 05:48:20.864352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.760 [2024-12-09 05:48:20.864368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.760 [2024-12-09 05:48:20.877496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.760 [2024-12-09 05:48:20.877528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.760 [2024-12-09 05:48:20.877545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.760 [2024-12-09 05:48:20.890225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.760 [2024-12-09 05:48:20.890278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:53:26.760 [2024-12-09 05:48:20.890297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.760 [2024-12-09 05:48:20.902733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.760 [2024-12-09 05:48:20.902764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.760 [2024-12-09 05:48:20.902781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.760 [2024-12-09 05:48:20.918468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.760 [2024-12-09 05:48:20.918500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.760 [2024-12-09 05:48:20.918517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.760 [2024-12-09 05:48:20.931825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.760 [2024-12-09 05:48:20.931858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.760 [2024-12-09 05:48:20.931875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.760 [2024-12-09 05:48:20.947500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.760 [2024-12-09 05:48:20.947532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.760 [2024-12-09 05:48:20.947550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.760 [2024-12-09 05:48:20.958033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.760 [2024-12-09 05:48:20.958068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.760 [2024-12-09 05:48:20.958086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:26.760 [2024-12-09 05:48:20.973183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:26.760 [2024-12-09 05:48:20.973213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:26.760 [2024-12-09 05:48:20.973229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.017 [2024-12-09 05:48:20.988097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.017 [2024-12-09 05:48:20.988127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:18813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.017 [2024-12-09 05:48:20.988143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.017 [2024-12-09 05:48:21.004691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.017 [2024-12-09 05:48:21.004720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.017 [2024-12-09 05:48:21.004736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.017 [2024-12-09 05:48:21.021355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.017 [2024-12-09 05:48:21.021388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.017 [2024-12-09 05:48:21.021406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.017 [2024-12-09 05:48:21.038422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.017 [2024-12-09 05:48:21.038454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.017 [2024-12-09 05:48:21.038476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.018 [2024-12-09 05:48:21.051196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.018 [2024-12-09 05:48:21.051226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.018 [2024-12-09 05:48:21.051243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.018 [2024-12-09 05:48:21.062951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.018 [2024-12-09 05:48:21.062982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.018 [2024-12-09 05:48:21.063000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.018 [2024-12-09 05:48:21.076981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.018 [2024-12-09 05:48:21.077011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.018 [2024-12-09 05:48:21.077035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.018 [2024-12-09 05:48:21.091833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.018 [2024-12-09 05:48:21.091862] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.018 [2024-12-09 05:48:21.091878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.018 [2024-12-09 05:48:21.106461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.018 [2024-12-09 05:48:21.106494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.018 [2024-12-09 05:48:21.106511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.018 [2024-12-09 05:48:21.118372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.018 [2024-12-09 05:48:21.118402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.018 [2024-12-09 05:48:21.118419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.018 [2024-12-09 05:48:21.131769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.018 [2024-12-09 05:48:21.131798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.018 [2024-12-09 05:48:21.131827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.018 [2024-12-09 05:48:21.143922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.018 [2024-12-09 05:48:21.143950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.018 [2024-12-09 05:48:21.143973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.018 [2024-12-09 05:48:21.158885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.018 [2024-12-09 05:48:21.158913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.018 [2024-12-09 05:48:21.158934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.018 [2024-12-09 05:48:21.175023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.018 [2024-12-09 05:48:21.175052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.018 [2024-12-09 05:48:21.175072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.018 [2024-12-09 05:48:21.189977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 
00:53:27.018 [2024-12-09 05:48:21.190008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.018 [2024-12-09 05:48:21.190027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.018 [2024-12-09 05:48:21.201221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.018 [2024-12-09 05:48:21.201249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.018 [2024-12-09 05:48:21.201302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.018 [2024-12-09 05:48:21.216819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.018 [2024-12-09 05:48:21.216848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.018 [2024-12-09 05:48:21.216866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.018 [2024-12-09 05:48:21.230808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.018 [2024-12-09 05:48:21.230839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.018 [2024-12-09 05:48:21.230859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.275 [2024-12-09 05:48:21.244353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.276 [2024-12-09 05:48:21.244384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.276 [2024-12-09 05:48:21.244402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.276 [2024-12-09 05:48:21.260993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.276 [2024-12-09 05:48:21.261023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.276 [2024-12-09 05:48:21.261042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.276 [2024-12-09 05:48:21.276925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.276 [2024-12-09 05:48:21.276954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.276 [2024-12-09 05:48:21.276971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.276 [2024-12-09 05:48:21.293121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.276 [2024-12-09 05:48:21.293150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.276 [2024-12-09 05:48:21.293166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.276 [2024-12-09 05:48:21.308805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.276 [2024-12-09 05:48:21.308842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.276 [2024-12-09 05:48:21.308862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.276 [2024-12-09 05:48:21.320194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.276 [2024-12-09 05:48:21.320222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.276 [2024-12-09 05:48:21.320241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.276 [2024-12-09 05:48:21.334841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.276 [2024-12-09 05:48:21.334890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.276 [2024-12-09 05:48:21.334910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.276 [2024-12-09 05:48:21.349635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.276 [2024-12-09 05:48:21.349666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.276 [2024-12-09 05:48:21.349684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.276 [2024-12-09 05:48:21.363690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.276 [2024-12-09 05:48:21.363719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.276 [2024-12-09 05:48:21.363736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.276 [2024-12-09 05:48:21.378268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.276 [2024-12-09 05:48:21.378310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.276 [2024-12-09 05:48:21.378328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.276 [2024-12-09 05:48:21.388314] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.276 [2024-12-09 05:48:21.388343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.276 [2024-12-09 05:48:21.388359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.276 [2024-12-09 05:48:21.402700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.276 [2024-12-09 05:48:21.402728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.276 [2024-12-09 05:48:21.402747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.276 [2024-12-09 05:48:21.419111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.276 [2024-12-09 05:48:21.419141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.276 [2024-12-09 05:48:21.419160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.276 [2024-12-09 05:48:21.432909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.276 [2024-12-09 05:48:21.432939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.276 [2024-12-09 05:48:21.432957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.276 [2024-12-09 05:48:21.446408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.276 [2024-12-09 05:48:21.446437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.276 [2024-12-09 05:48:21.446456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.276 [2024-12-09 05:48:21.462844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.276 [2024-12-09 05:48:21.462887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.276 [2024-12-09 05:48:21.462904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.276 [2024-12-09 05:48:21.475009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.276 [2024-12-09 05:48:21.475037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.276 [2024-12-09 05:48:21.475055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:53:27.276 [2024-12-09 05:48:21.488688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.276 [2024-12-09 05:48:21.488717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.276 [2024-12-09 05:48:21.488734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.534 [2024-12-09 05:48:21.503808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.534 [2024-12-09 05:48:21.503838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.534 [2024-12-09 05:48:21.503856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.534 [2024-12-09 05:48:21.519352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.534 [2024-12-09 05:48:21.519383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.534 [2024-12-09 05:48:21.519403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.534 [2024-12-09 05:48:21.533015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.534 [2024-12-09 05:48:21.533046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.534 [2024-12-09 05:48:21.533063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.534 [2024-12-09 05:48:21.547521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.534 [2024-12-09 05:48:21.547551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.534 [2024-12-09 05:48:21.547567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.534 [2024-12-09 05:48:21.561347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.534 [2024-12-09 05:48:21.561379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.534 [2024-12-09 05:48:21.561403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.534 [2024-12-09 05:48:21.574967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.534 [2024-12-09 05:48:21.574997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.534 [2024-12-09 05:48:21.575024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.534 [2024-12-09 05:48:21.587383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.534 [2024-12-09 05:48:21.587413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.534 [2024-12-09 05:48:21.587430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.534 [2024-12-09 05:48:21.600858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.534 [2024-12-09 05:48:21.600885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.534 [2024-12-09 05:48:21.600903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.534 [2024-12-09 05:48:21.616881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.534 [2024-12-09 05:48:21.616912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.534 [2024-12-09 05:48:21.616931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.534 [2024-12-09 05:48:21.631393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.534 [2024-12-09 05:48:21.631425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.534 [2024-12-09 05:48:21.631444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.534 [2024-12-09 05:48:21.643164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.534 [2024-12-09 05:48:21.643192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.534 [2024-12-09 05:48:21.643212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.534 [2024-12-09 05:48:21.656809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.534 [2024-12-09 05:48:21.656852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.534 [2024-12-09 05:48:21.656869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.534 [2024-12-09 05:48:21.670817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.534 [2024-12-09 05:48:21.670847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.534 [2024-12-09 05:48:21.670865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.534 [2024-12-09 05:48:21.685184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.534 [2024-12-09 05:48:21.685212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.534 [2024-12-09 05:48:21.685228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.535 [2024-12-09 05:48:21.697854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.535 [2024-12-09 05:48:21.697882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.535 [2024-12-09 05:48:21.697906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.535 [2024-12-09 05:48:21.712029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.535 [2024-12-09 05:48:21.712059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.535 [2024-12-09 05:48:21.712076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.535 [2024-12-09 05:48:21.726088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.535 [2024-12-09 05:48:21.726118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.535 [2024-12-09 05:48:21.726135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.535 18078.50 IOPS, 70.62 MiB/s [2024-12-09T04:48:21.760Z] [2024-12-09 05:48:21.738268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f747e0) 00:53:27.535 [2024-12-09 05:48:21.738319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:27.535 [2024-12-09 05:48:21.738337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:27.535 00:53:27.535 Latency(us) 00:53:27.535 [2024-12-09T04:48:21.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:27.535 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:53:27.535 nvme0n1 : 2.01 18095.07 70.68 0.00 0.00 7063.44 3373.89 23107.51 00:53:27.535 [2024-12-09T04:48:21.760Z] =================================================================================================================== 00:53:27.535 [2024-12-09T04:48:21.760Z] Total : 18095.07 70.68 0.00 0.00 7063.44 3373.89 23107.51 00:53:27.535 { 00:53:27.535 "results": [ 00:53:27.535 { 00:53:27.535 "job": "nvme0n1", 00:53:27.535 "core_mask": "0x2", 00:53:27.535 "workload": "randread", 00:53:27.535 "status": "finished", 00:53:27.535 "queue_depth": 128, 
00:53:27.535 "io_size": 4096, 00:53:27.535 "runtime": 2.007121, 00:53:27.535 "iops": 18095.072494383745, 00:53:27.535 "mibps": 70.6838769311865, 00:53:27.535 "io_failed": 0, 00:53:27.535 "io_timeout": 0, 00:53:27.535 "avg_latency_us": 7063.441399859067, 00:53:27.535 "min_latency_us": 3373.8903703703704, 00:53:27.535 "max_latency_us": 23107.508148148147 00:53:27.535 } 00:53:27.535 ], 00:53:27.535 "core_count": 1 00:53:27.535 } 00:53:27.535 05:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:53:27.535 05:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:53:27.535 05:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:53:27.535 05:48:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:53:27.535 | .driver_specific 00:53:27.535 | .nvme_error 00:53:27.535 | .status_code 00:53:27.535 | .command_transient_transport_error' 00:53:28.099 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 142 > 0 )) 00:53:28.099 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 755435 00:53:28.099 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 755435 ']' 00:53:28.099 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 755435 00:53:28.099 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:53:28.099 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:28.099 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 755435 00:53:28.099 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:53:28.099 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:53:28.099 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 755435' 00:53:28.099 killing process with pid 755435 00:53:28.099 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 755435 00:53:28.099 Received shutdown signal, test time was about 2.000000 seconds 00:53:28.099 00:53:28.099 Latency(us) 00:53:28.099 [2024-12-09T04:48:22.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:28.099 [2024-12-09T04:48:22.324Z] =================================================================================================================== 00:53:28.099 [2024-12-09T04:48:22.324Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:53:28.099 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 755435 00:53:28.099 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:53:28.099 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:53:28.099 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:53:28.099 
05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:53:28.099 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:53:28.099 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=755959 00:53:28.099 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:53:28.099 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 755959 /var/tmp/bperf.sock 00:53:28.099 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 755959 ']' 00:53:28.099 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:53:28.099 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:28.099 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:53:28.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:53:28.099 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:28.099 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:53:28.356 [2024-12-09 05:48:22.360420] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:53:28.356 [2024-12-09 05:48:22.360511] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755959 ] 00:53:28.356 I/O size of 131072 is greater than zero copy threshold (65536). 00:53:28.356 Zero copy mechanism will not be used. 
00:53:28.356 [2024-12-09 05:48:22.428382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:28.356 [2024-12-09 05:48:22.483484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:28.614 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:28.614 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:53:28.614 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:53:28.614 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:53:28.871 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:53:28.871 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:28.871 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:53:28.871 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:28.871 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:53:28.871 05:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:53:29.129 nvme0n1 00:53:29.129 05:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:53:29.129 05:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:29.129 05:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:53:29.129 05:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:29.129 05:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:53:29.129 05:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:53:29.387 I/O size of 131072 is greater than zero copy threshold (65536). 00:53:29.387 Zero copy mechanism will not be used. 00:53:29.387 Running I/O for 2 seconds... 
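The trace above is the setup for the second error-injection pass (randread, 128 KiB I/O, queue depth 16, driven over /var/tmp/bperf.sock). Condensed into a minimal sketch that uses only the RPC calls and paths visible in the trace; the $rpc/$bperf shorthands are illustrative and not names from digest.sh, and flag semantics are as logged (--ddgst enables the TCP data digest, -i 32 is passed to the crc32c corruption injector verbatim):

  # Assumes the nvmf/TCP target configured earlier in this log is listening on
  # 10.0.0.2:4420 and that bdevperf was started with "-r /var/tmp/bperf.sock ... -z"
  # so it idles until perform_tests is requested over RPC.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bperf="$rpc -s /var/tmp/bperf.sock"

  # Keep per-type NVMe error counters and retry failed I/O indefinitely in the bdev layer.
  $bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the remote namespace with TCP data digest enabled.
  $bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Inject crc32c corruption through the accel framework (issued via rpc_cmd in the
  # trace, i.e. not against the bperf socket); data-digest verification then fails and
  # reads complete with COMMAND TRANSIENT TRANSPORT ERROR, as seen throughout this log.
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32

  # Kick off the queued bdevperf job, then read back the transient-error counter that
  # host/digest.sh requires to be nonzero (the "(( 142 > 0 ))" check after the 4 KiB pass).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests
  $bperf bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
      | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Throughput in this pass is not the signal; the I/O is expected to keep erroring, and only the transient-transport-error count decides whether this leg of nvmf_digest_error passes.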
00:53:29.387 [2024-12-09 05:48:23.444395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.387 [2024-12-09 05:48:23.444466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.387 [2024-12-09 05:48:23.444488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.387 [2024-12-09 05:48:23.451187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.387 [2024-12-09 05:48:23.451223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.387 [2024-12-09 05:48:23.451249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.387 [2024-12-09 05:48:23.458394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.387 [2024-12-09 05:48:23.458427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.387 [2024-12-09 05:48:23.458450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.387 [2024-12-09 05:48:23.466151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.387 [2024-12-09 05:48:23.466186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.387 [2024-12-09 05:48:23.466223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.387 [2024-12-09 05:48:23.473965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.387 [2024-12-09 05:48:23.473998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.387 [2024-12-09 05:48:23.474023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.387 [2024-12-09 05:48:23.481819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.387 [2024-12-09 05:48:23.481852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.387 [2024-12-09 05:48:23.481872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.387 [2024-12-09 05:48:23.489484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.387 [2024-12-09 05:48:23.489517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.387 [2024-12-09 05:48:23.489535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.387 [2024-12-09 05:48:23.494574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.387 [2024-12-09 05:48:23.494607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.387 [2024-12-09 05:48:23.494626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.387 [2024-12-09 05:48:23.498333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.387 [2024-12-09 05:48:23.498365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.387 [2024-12-09 05:48:23.498383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.387 [2024-12-09 05:48:23.504306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.387 [2024-12-09 05:48:23.504339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.387 [2024-12-09 05:48:23.504373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.388 [2024-12-09 05:48:23.509682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.388 [2024-12-09 05:48:23.509715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.388 [2024-12-09 05:48:23.509734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.388 [2024-12-09 05:48:23.515800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.388 [2024-12-09 05:48:23.515832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.388 [2024-12-09 05:48:23.515850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.388 [2024-12-09 05:48:23.523523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.388 [2024-12-09 05:48:23.523564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.388 [2024-12-09 05:48:23.523583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.388 [2024-12-09 05:48:23.529528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.388 [2024-12-09 05:48:23.529560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.388 [2024-12-09 05:48:23.529578] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.388 [2024-12-09 05:48:23.535648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.388 [2024-12-09 05:48:23.535695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.388 [2024-12-09 05:48:23.535713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.388 [2024-12-09 05:48:23.541415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.388 [2024-12-09 05:48:23.541447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.388 [2024-12-09 05:48:23.541475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.388 [2024-12-09 05:48:23.547351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.388 [2024-12-09 05:48:23.547384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.388 [2024-12-09 05:48:23.547405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.388 [2024-12-09 05:48:23.553490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.388 [2024-12-09 05:48:23.553522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.388 [2024-12-09 05:48:23.553543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.388 [2024-12-09 05:48:23.556850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.388 [2024-12-09 05:48:23.556895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.388 [2024-12-09 05:48:23.556911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.388 [2024-12-09 05:48:23.563190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.388 [2024-12-09 05:48:23.563240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.388 [2024-12-09 05:48:23.563263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.388 [2024-12-09 05:48:23.569332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.388 [2024-12-09 05:48:23.569380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.388 [2024-12-09 
05:48:23.569397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.388 [2024-12-09 05:48:23.575676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.388 [2024-12-09 05:48:23.575707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.388 [2024-12-09 05:48:23.575725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.388 [2024-12-09 05:48:23.581300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.388 [2024-12-09 05:48:23.581333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.388 [2024-12-09 05:48:23.581357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.388 [2024-12-09 05:48:23.585732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.388 [2024-12-09 05:48:23.585764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.388 [2024-12-09 05:48:23.585784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.388 [2024-12-09 05:48:23.590180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.388 [2024-12-09 05:48:23.590213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.388 [2024-12-09 05:48:23.590230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.388 [2024-12-09 05:48:23.595152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.388 [2024-12-09 05:48:23.595184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.388 [2024-12-09 05:48:23.595203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.388 [2024-12-09 05:48:23.599976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.388 [2024-12-09 05:48:23.600008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.388 [2024-12-09 05:48:23.600027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.388 [2024-12-09 05:48:23.605119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.388 [2024-12-09 05:48:23.605151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:53:29.388 [2024-12-09 05:48:23.605170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.647 [2024-12-09 05:48:23.612000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.647 [2024-12-09 05:48:23.612033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.647 [2024-12-09 05:48:23.612051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.647 [2024-12-09 05:48:23.617988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.647 [2024-12-09 05:48:23.618020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.647 [2024-12-09 05:48:23.618045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.647 [2024-12-09 05:48:23.623531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.647 [2024-12-09 05:48:23.623563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.647 [2024-12-09 05:48:23.623581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.647 [2024-12-09 05:48:23.628925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.647 [2024-12-09 05:48:23.628958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.647 [2024-12-09 05:48:23.628977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.647 [2024-12-09 05:48:23.633735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.647 [2024-12-09 05:48:23.633767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.647 [2024-12-09 05:48:23.633784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.647 [2024-12-09 05:48:23.638736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.647 [2024-12-09 05:48:23.638768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.647 [2024-12-09 05:48:23.638786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.647 [2024-12-09 05:48:23.645242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.647 [2024-12-09 05:48:23.645282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:3 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.647 [2024-12-09 05:48:23.645303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.647 [2024-12-09 05:48:23.651456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.647 [2024-12-09 05:48:23.651489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.647 [2024-12-09 05:48:23.651507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.647 [2024-12-09 05:48:23.656893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.647 [2024-12-09 05:48:23.656925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.647 [2024-12-09 05:48:23.656944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.647 [2024-12-09 05:48:23.661693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.647 [2024-12-09 05:48:23.661725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.647 [2024-12-09 05:48:23.661743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.647 [2024-12-09 05:48:23.664958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.647 [2024-12-09 05:48:23.664990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.647 [2024-12-09 05:48:23.665008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.647 [2024-12-09 05:48:23.670283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.647 [2024-12-09 05:48:23.670315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.647 [2024-12-09 05:48:23.670332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.647 [2024-12-09 05:48:23.676090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.647 [2024-12-09 05:48:23.676137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.647 [2024-12-09 05:48:23.676157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.647 [2024-12-09 05:48:23.682166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.647 [2024-12-09 05:48:23.682199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.647 [2024-12-09 05:48:23.682218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.647 [2024-12-09 05:48:23.687437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.647 [2024-12-09 05:48:23.687470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.647 [2024-12-09 05:48:23.687489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.647 [2024-12-09 05:48:23.693745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.647 [2024-12-09 05:48:23.693777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.647 [2024-12-09 05:48:23.693795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.647 [2024-12-09 05:48:23.699832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.647 [2024-12-09 05:48:23.699865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.647 [2024-12-09 05:48:23.699898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.647 [2024-12-09 05:48:23.704979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.647 [2024-12-09 05:48:23.705012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.647 [2024-12-09 05:48:23.705030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.647 [2024-12-09 05:48:23.709436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.647 [2024-12-09 05:48:23.709468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.647 [2024-12-09 05:48:23.709493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.647 [2024-12-09 05:48:23.714194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.647 [2024-12-09 05:48:23.714226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.647 [2024-12-09 05:48:23.714257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.647 [2024-12-09 05:48:23.719605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.647 
[2024-12-09 05:48:23.719636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.647 [2024-12-09 05:48:23.719654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.648 [2024-12-09 05:48:23.727128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.648 [2024-12-09 05:48:23.727161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.648 [2024-12-09 05:48:23.727193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.648 [2024-12-09 05:48:23.733844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.648 [2024-12-09 05:48:23.733893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.648 [2024-12-09 05:48:23.733910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.648 [2024-12-09 05:48:23.739510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.648 [2024-12-09 05:48:23.739543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.648 [2024-12-09 05:48:23.739561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.648 [2024-12-09 05:48:23.744939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.648 [2024-12-09 05:48:23.744970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.648 [2024-12-09 05:48:23.744989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.648 [2024-12-09 05:48:23.750128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.648 [2024-12-09 05:48:23.750162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.648 [2024-12-09 05:48:23.750180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.648 [2024-12-09 05:48:23.755320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.648 [2024-12-09 05:48:23.755352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.648 [2024-12-09 05:48:23.755370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.648 [2024-12-09 05:48:23.759833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x2273c20) 00:53:29.648 [2024-12-09 05:48:23.759869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.648 [2024-12-09 05:48:23.759902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.648 [2024-12-09 05:48:23.764371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.648 [2024-12-09 05:48:23.764402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.648 [2024-12-09 05:48:23.764436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.648 [2024-12-09 05:48:23.769436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.648 [2024-12-09 05:48:23.769482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.648 [2024-12-09 05:48:23.769500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.648 [2024-12-09 05:48:23.774039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.648 [2024-12-09 05:48:23.774084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.648 [2024-12-09 05:48:23.774101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.648 [2024-12-09 05:48:23.778588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.648 [2024-12-09 05:48:23.778618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.648 [2024-12-09 05:48:23.778652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.648 [2024-12-09 05:48:23.783150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.648 [2024-12-09 05:48:23.783182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.648 [2024-12-09 05:48:23.783200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.648 [2024-12-09 05:48:23.787464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.648 [2024-12-09 05:48:23.787494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.648 [2024-12-09 05:48:23.787512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.648 [2024-12-09 05:48:23.792180] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.648 [2024-12-09 05:48:23.792211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.648 [2024-12-09 05:48:23.792244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.648 [2024-12-09 05:48:23.796741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.648 [2024-12-09 05:48:23.796773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.648 [2024-12-09 05:48:23.796791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.648 [2024-12-09 05:48:23.801004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.648 [2024-12-09 05:48:23.801049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.648 [2024-12-09 05:48:23.801066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.648 [2024-12-09 05:48:23.805558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.648 [2024-12-09 05:48:23.805603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.648 [2024-12-09 05:48:23.805620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.648 [2024-12-09 05:48:23.810422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.648 [2024-12-09 05:48:23.810468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.648 [2024-12-09 05:48:23.810486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.648 [2024-12-09 05:48:23.815339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.648 [2024-12-09 05:48:23.815370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.648 [2024-12-09 05:48:23.815404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.648 [2024-12-09 05:48:23.819958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.648 [2024-12-09 05:48:23.819989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.648 [2024-12-09 05:48:23.820008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:53:29.648 [2024-12-09 05:48:23.825059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.648 [2024-12-09 05:48:23.825089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.648 [2024-12-09 05:48:23.825106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.648 [2024-12-09 05:48:23.830713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.648 [2024-12-09 05:48:23.830746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.648 [2024-12-09 05:48:23.830764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.648 [2024-12-09 05:48:23.836450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.648 [2024-12-09 05:48:23.836482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.648 [2024-12-09 05:48:23.836515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.648 [2024-12-09 05:48:23.842567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.648 [2024-12-09 05:48:23.842600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.648 [2024-12-09 05:48:23.842638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.648 [2024-12-09 05:48:23.848143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.648 [2024-12-09 05:48:23.848187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.648 [2024-12-09 05:48:23.848204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.648 [2024-12-09 05:48:23.853492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.648 [2024-12-09 05:48:23.853538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.648 [2024-12-09 05:48:23.853556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.648 [2024-12-09 05:48:23.858456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.648 [2024-12-09 05:48:23.858487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.648 [2024-12-09 05:48:23.858504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.649 [2024-12-09 05:48:23.863099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.649 [2024-12-09 05:48:23.863144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.649 [2024-12-09 05:48:23.863161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.649 [2024-12-09 05:48:23.868308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.649 [2024-12-09 05:48:23.868340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.649 [2024-12-09 05:48:23.868358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:23.874890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:23.874921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:23.874953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:23.882445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:23.882477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:23.882496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:23.887853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:23.887886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:23.887904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:23.893392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:23.893424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:23.893442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:23.898336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:23.898369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:23.898387] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:23.903858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:23.903904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:23.903923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:23.908624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:23.908655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:23.908687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:23.913727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:23.913759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:23.913777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:23.918722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:23.918755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:23.918773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:23.924286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:23.924318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:23.924336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:23.929573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:23.929605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:23.929623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:23.935488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:23.935520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:53:29.908 [2024-12-09 05:48:23.935543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:23.941668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:23.941700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:23.941718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:23.949621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:23.949668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:23.949687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:23.955571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:23.955604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:23.955622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:23.961780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:23.961813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:23.961832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:23.968183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:23.968215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:23.968234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:23.974517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:23.974550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:23.974569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:23.980043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:23.980076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5536 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:23.980094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:23.986167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:23.986201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:23.986219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:23.993162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:23.993201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:23.993220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:23.998752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:23.998785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:23.998819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:24.003841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:24.003874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:24.003892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:24.008245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:24.008286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:24.008307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:24.011415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:24.011446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:24.011463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:24.016148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:24.016179] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:24.016197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:24.020706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:24.020737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:24.020769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:24.026028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:24.026074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:24.026092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:24.031017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:24.031048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:24.031065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.908 [2024-12-09 05:48:24.035591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.908 [2024-12-09 05:48:24.035623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.908 [2024-12-09 05:48:24.035640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.909 [2024-12-09 05:48:24.040638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.909 [2024-12-09 05:48:24.040669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.909 [2024-12-09 05:48:24.040687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.909 [2024-12-09 05:48:24.045679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.909 [2024-12-09 05:48:24.045710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.909 [2024-12-09 05:48:24.045741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.909 [2024-12-09 05:48:24.050357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.909 [2024-12-09 
05:48:24.050389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.909 [2024-12-09 05:48:24.050408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.909 [2024-12-09 05:48:24.055572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.909 [2024-12-09 05:48:24.055605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.909 [2024-12-09 05:48:24.055623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.909 [2024-12-09 05:48:24.060117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.909 [2024-12-09 05:48:24.060148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.909 [2024-12-09 05:48:24.060167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.909 [2024-12-09 05:48:24.064695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.909 [2024-12-09 05:48:24.064727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.909 [2024-12-09 05:48:24.064762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.909 [2024-12-09 05:48:24.069268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.909 [2024-12-09 05:48:24.069306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.909 [2024-12-09 05:48:24.069324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.909 [2024-12-09 05:48:24.073880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.909 [2024-12-09 05:48:24.073926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.909 [2024-12-09 05:48:24.073948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.909 [2024-12-09 05:48:24.078666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.909 [2024-12-09 05:48:24.078710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.909 [2024-12-09 05:48:24.078727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.909 [2024-12-09 05:48:24.083433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2273c20) 00:53:29.909 [2024-12-09 05:48:24.083464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.909 [2024-12-09 05:48:24.083496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.909 [2024-12-09 05:48:24.087997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.909 [2024-12-09 05:48:24.088027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.909 [2024-12-09 05:48:24.088045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.909 [2024-12-09 05:48:24.092671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.909 [2024-12-09 05:48:24.092702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.909 [2024-12-09 05:48:24.092734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.909 [2024-12-09 05:48:24.097390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.909 [2024-12-09 05:48:24.097421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.909 [2024-12-09 05:48:24.097439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.909 [2024-12-09 05:48:24.101806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.909 [2024-12-09 05:48:24.101837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.909 [2024-12-09 05:48:24.101870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:29.909 [2024-12-09 05:48:24.107227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.909 [2024-12-09 05:48:24.107259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.909 [2024-12-09 05:48:24.107284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:29.909 [2024-12-09 05:48:24.114102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.909 [2024-12-09 05:48:24.114148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.909 [2024-12-09 05:48:24.114165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:29.909 [2024-12-09 05:48:24.121695] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.909 [2024-12-09 05:48:24.121733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.909 [2024-12-09 05:48:24.121767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:29.909 [2024-12-09 05:48:24.127146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:29.909 [2024-12-09 05:48:24.127179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:29.909 [2024-12-09 05:48:24.127197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.168 [2024-12-09 05:48:24.133125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.168 [2024-12-09 05:48:24.133174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.168 [2024-12-09 05:48:24.133192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.168 [2024-12-09 05:48:24.138426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.168 [2024-12-09 05:48:24.138474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.168 [2024-12-09 05:48:24.138493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.168 [2024-12-09 05:48:24.143335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.168 [2024-12-09 05:48:24.143367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.168 [2024-12-09 05:48:24.143385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.168 [2024-12-09 05:48:24.147857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.168 [2024-12-09 05:48:24.147888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.168 [2024-12-09 05:48:24.147906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.168 [2024-12-09 05:48:24.150951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.168 [2024-12-09 05:48:24.150982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.168 [2024-12-09 05:48:24.151001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:53:30.169 [2024-12-09 05:48:24.155323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.169 [2024-12-09 05:48:24.155354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.169 [2024-12-09 05:48:24.155372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.169 [2024-12-09 05:48:24.159940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.169 [2024-12-09 05:48:24.159972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.169 [2024-12-09 05:48:24.159989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.169 [2024-12-09 05:48:24.165120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.169 [2024-12-09 05:48:24.165152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.169 [2024-12-09 05:48:24.165170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.169 [2024-12-09 05:48:24.170591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.169 [2024-12-09 05:48:24.170623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.169 [2024-12-09 05:48:24.170641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.169 [2024-12-09 05:48:24.175846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.169 [2024-12-09 05:48:24.175878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.169 [2024-12-09 05:48:24.175896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.169 [2024-12-09 05:48:24.181152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.169 [2024-12-09 05:48:24.181185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.169 [2024-12-09 05:48:24.181203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.169 [2024-12-09 05:48:24.185973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.169 [2024-12-09 05:48:24.186006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.169 [2024-12-09 05:48:24.186025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.169 [2024-12-09 05:48:24.190598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.169 [2024-12-09 05:48:24.190631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.169 [2024-12-09 05:48:24.190649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.169 [2024-12-09 05:48:24.195150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.169 [2024-12-09 05:48:24.195183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.169 [2024-12-09 05:48:24.195201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.169 [2024-12-09 05:48:24.199859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.169 [2024-12-09 05:48:24.199906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.169 [2024-12-09 05:48:24.199924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.169 [2024-12-09 05:48:24.204583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.169 [2024-12-09 05:48:24.204615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.169 [2024-12-09 05:48:24.204658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.169 [2024-12-09 05:48:24.209295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.169 [2024-12-09 05:48:24.209327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.169 [2024-12-09 05:48:24.209344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.169 [2024-12-09 05:48:24.213914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.169 [2024-12-09 05:48:24.213945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.169 [2024-12-09 05:48:24.213963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.169 [2024-12-09 05:48:24.219071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.169 [2024-12-09 05:48:24.219102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.169 [2024-12-09 05:48:24.219120] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.169 [2024-12-09 05:48:24.224349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.169 [2024-12-09 05:48:24.224381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.169 [2024-12-09 05:48:24.224399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.169 [2024-12-09 05:48:24.229459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.169 [2024-12-09 05:48:24.229491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.169 [2024-12-09 05:48:24.229510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.169 [2024-12-09 05:48:24.234650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.169 [2024-12-09 05:48:24.234683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.169 [2024-12-09 05:48:24.234701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.169 [2024-12-09 05:48:24.239939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.169 [2024-12-09 05:48:24.239973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.169 [2024-12-09 05:48:24.239991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.169 [2024-12-09 05:48:24.245636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.169 [2024-12-09 05:48:24.245668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.169 [2024-12-09 05:48:24.245686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.169 [2024-12-09 05:48:24.251729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.169 [2024-12-09 05:48:24.251762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.169 [2024-12-09 05:48:24.251780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.169 [2024-12-09 05:48:24.257016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.169 [2024-12-09 05:48:24.257048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.169 [2024-12-09 05:48:24.257066] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.169 [2024-12-09 05:48:24.262446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.169 [2024-12-09 05:48:24.262478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.169 [2024-12-09 05:48:24.262496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.169 [2024-12-09 05:48:24.267581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.169 [2024-12-09 05:48:24.267613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.169 [2024-12-09 05:48:24.267631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.169 [2024-12-09 05:48:24.273353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.169 [2024-12-09 05:48:24.273385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.169 [2024-12-09 05:48:24.273403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.169 [2024-12-09 05:48:24.280616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.169 [2024-12-09 05:48:24.280648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.169 [2024-12-09 05:48:24.280681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.169 [2024-12-09 05:48:24.288066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.169 [2024-12-09 05:48:24.288099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.169 [2024-12-09 05:48:24.288118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.169 [2024-12-09 05:48:24.293983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.169 [2024-12-09 05:48:24.294016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.169 [2024-12-09 05:48:24.294035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.170 [2024-12-09 05:48:24.299992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.170 [2024-12-09 05:48:24.300024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:53:30.170 [2024-12-09 05:48:24.300048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.170 [2024-12-09 05:48:24.306148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.170 [2024-12-09 05:48:24.306180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.170 [2024-12-09 05:48:24.306199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.170 [2024-12-09 05:48:24.310918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.170 [2024-12-09 05:48:24.310949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.170 [2024-12-09 05:48:24.310968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.170 [2024-12-09 05:48:24.315541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.170 [2024-12-09 05:48:24.315571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.170 [2024-12-09 05:48:24.315588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.170 [2024-12-09 05:48:24.320298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.170 [2024-12-09 05:48:24.320329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.170 [2024-12-09 05:48:24.320348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.170 [2024-12-09 05:48:24.325526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.170 [2024-12-09 05:48:24.325558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.170 [2024-12-09 05:48:24.325576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.170 [2024-12-09 05:48:24.330787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.170 [2024-12-09 05:48:24.330818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.170 [2024-12-09 05:48:24.330835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.170 [2024-12-09 05:48:24.336830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.170 [2024-12-09 05:48:24.336862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.170 [2024-12-09 05:48:24.336880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.170 [2024-12-09 05:48:24.341970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.170 [2024-12-09 05:48:24.342002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.170 [2024-12-09 05:48:24.342020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.170 [2024-12-09 05:48:24.346458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.170 [2024-12-09 05:48:24.346494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.170 [2024-12-09 05:48:24.346513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.170 [2024-12-09 05:48:24.351174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.170 [2024-12-09 05:48:24.351205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.170 [2024-12-09 05:48:24.351223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.170 [2024-12-09 05:48:24.355968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.170 [2024-12-09 05:48:24.356000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.170 [2024-12-09 05:48:24.356017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.170 [2024-12-09 05:48:24.360776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.170 [2024-12-09 05:48:24.360808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.170 [2024-12-09 05:48:24.360825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.170 [2024-12-09 05:48:24.365497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.170 [2024-12-09 05:48:24.365538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.170 [2024-12-09 05:48:24.365556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.170 [2024-12-09 05:48:24.370510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.170 [2024-12-09 05:48:24.370542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.170 [2024-12-09 05:48:24.370559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.170 [2024-12-09 05:48:24.375399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.170 [2024-12-09 05:48:24.375431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.170 [2024-12-09 05:48:24.375449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.170 [2024-12-09 05:48:24.380191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.170 [2024-12-09 05:48:24.380222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.170 [2024-12-09 05:48:24.380241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.170 [2024-12-09 05:48:24.385805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.170 [2024-12-09 05:48:24.385838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.170 [2024-12-09 05:48:24.385855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.170 [2024-12-09 05:48:24.391197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.170 [2024-12-09 05:48:24.391230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.170 [2024-12-09 05:48:24.391249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.429 [2024-12-09 05:48:24.396567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.429 [2024-12-09 05:48:24.396599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.429 [2024-12-09 05:48:24.396617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.429 [2024-12-09 05:48:24.402090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.429 [2024-12-09 05:48:24.402123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.429 [2024-12-09 05:48:24.402141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.429 [2024-12-09 05:48:24.409329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.429 
[2024-12-09 05:48:24.409362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.429 [2024-12-09 05:48:24.409381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.429 [2024-12-09 05:48:24.417013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.429 [2024-12-09 05:48:24.417046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.429 [2024-12-09 05:48:24.417064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.429 [2024-12-09 05:48:24.424384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.429 [2024-12-09 05:48:24.424416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.429 [2024-12-09 05:48:24.424435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.429 [2024-12-09 05:48:24.432290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.429 [2024-12-09 05:48:24.432323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.429 [2024-12-09 05:48:24.432341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.429 5672.00 IOPS, 709.00 MiB/s [2024-12-09T04:48:24.654Z] [2024-12-09 05:48:24.440805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.429 [2024-12-09 05:48:24.440838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.429 [2024-12-09 05:48:24.440858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.429 [2024-12-09 05:48:24.448541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.429 [2024-12-09 05:48:24.448575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.429 [2024-12-09 05:48:24.448599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.429 [2024-12-09 05:48:24.456242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.429 [2024-12-09 05:48:24.456283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.429 [2024-12-09 05:48:24.456304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.429 [2024-12-09 05:48:24.463907] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.429 [2024-12-09 05:48:24.463940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.429 [2024-12-09 05:48:24.463959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.429 [2024-12-09 05:48:24.471606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.429 [2024-12-09 05:48:24.471644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.429 [2024-12-09 05:48:24.471662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.429 [2024-12-09 05:48:24.479282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.429 [2024-12-09 05:48:24.479342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.429 [2024-12-09 05:48:24.479363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.429 [2024-12-09 05:48:24.486990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.429 [2024-12-09 05:48:24.487038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.430 [2024-12-09 05:48:24.487056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.430 [2024-12-09 05:48:24.494709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.430 [2024-12-09 05:48:24.494756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.430 [2024-12-09 05:48:24.494774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.430 [2024-12-09 05:48:24.499817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.430 [2024-12-09 05:48:24.499850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.430 [2024-12-09 05:48:24.499868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.430 [2024-12-09 05:48:24.506238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.430 [2024-12-09 05:48:24.506293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.430 [2024-12-09 05:48:24.506313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:53:30.430 [2024-12-09 05:48:24.513890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.430 [2024-12-09 05:48:24.513920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.430 [2024-12-09 05:48:24.513936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.430 [2024-12-09 05:48:24.521963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.430 [2024-12-09 05:48:24.522010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.430 [2024-12-09 05:48:24.522028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.430 [2024-12-09 05:48:24.529956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.430 [2024-12-09 05:48:24.530002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.430 [2024-12-09 05:48:24.530019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.430 [2024-12-09 05:48:24.537726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.430 [2024-12-09 05:48:24.537773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.430 [2024-12-09 05:48:24.537790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.430 [2024-12-09 05:48:24.544505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.430 [2024-12-09 05:48:24.544538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.430 [2024-12-09 05:48:24.544576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.430 [2024-12-09 05:48:24.551304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.430 [2024-12-09 05:48:24.551355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.430 [2024-12-09 05:48:24.551373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.430 [2024-12-09 05:48:24.557268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.430 [2024-12-09 05:48:24.557308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.430 [2024-12-09 05:48:24.557327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.430 [2024-12-09 05:48:24.562603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.430 [2024-12-09 05:48:24.562636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.430 [2024-12-09 05:48:24.562654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.430 [2024-12-09 05:48:24.568843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.430 [2024-12-09 05:48:24.568874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.430 [2024-12-09 05:48:24.568899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.430 [2024-12-09 05:48:24.574975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.430 [2024-12-09 05:48:24.575020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.430 [2024-12-09 05:48:24.575037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.430 [2024-12-09 05:48:24.580624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.430 [2024-12-09 05:48:24.580670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.430 [2024-12-09 05:48:24.580688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.430 [2024-12-09 05:48:24.585969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.430 [2024-12-09 05:48:24.586019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.430 [2024-12-09 05:48:24.586037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.430 [2024-12-09 05:48:24.592037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.430 [2024-12-09 05:48:24.592083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.430 [2024-12-09 05:48:24.592101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.430 [2024-12-09 05:48:24.598229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.430 [2024-12-09 05:48:24.598284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.430 [2024-12-09 05:48:24.598329] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.430 [2024-12-09 05:48:24.603800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.430 [2024-12-09 05:48:24.603843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.430 [2024-12-09 05:48:24.603862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.430 [2024-12-09 05:48:24.610061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.430 [2024-12-09 05:48:24.610097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.430 [2024-12-09 05:48:24.610115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.430 [2024-12-09 05:48:24.615441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.430 [2024-12-09 05:48:24.615479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.430 [2024-12-09 05:48:24.615498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.430 [2024-12-09 05:48:24.622397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.430 [2024-12-09 05:48:24.622435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.430 [2024-12-09 05:48:24.622462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.430 [2024-12-09 05:48:24.629253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.430 [2024-12-09 05:48:24.629293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.430 [2024-12-09 05:48:24.629313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.430 [2024-12-09 05:48:24.634925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.430 [2024-12-09 05:48:24.634957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.430 [2024-12-09 05:48:24.634976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.430 [2024-12-09 05:48:24.640682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.430 [2024-12-09 05:48:24.640714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.430 [2024-12-09 05:48:24.640733] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.430 [2024-12-09 05:48:24.645221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.430 [2024-12-09 05:48:24.645252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.430 [2024-12-09 05:48:24.645278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.430 [2024-12-09 05:48:24.649977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.430 [2024-12-09 05:48:24.650010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.430 [2024-12-09 05:48:24.650028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.689 [2024-12-09 05:48:24.654648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.689 [2024-12-09 05:48:24.654680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.689 [2024-12-09 05:48:24.654699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.689 [2024-12-09 05:48:24.659536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.689 [2024-12-09 05:48:24.659568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.689 [2024-12-09 05:48:24.659586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.689 [2024-12-09 05:48:24.665776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.689 [2024-12-09 05:48:24.665810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.689 [2024-12-09 05:48:24.665829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.689 [2024-12-09 05:48:24.670311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.689 [2024-12-09 05:48:24.670343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.689 [2024-12-09 05:48:24.670362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.689 [2024-12-09 05:48:24.674734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.689 [2024-12-09 05:48:24.674765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:53:30.689 [2024-12-09 05:48:24.674783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.689 [2024-12-09 05:48:24.678701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.689 [2024-12-09 05:48:24.678733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.690 [2024-12-09 05:48:24.678752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.690 [2024-12-09 05:48:24.683434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.690 [2024-12-09 05:48:24.683466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.690 [2024-12-09 05:48:24.683484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.690 [2024-12-09 05:48:24.689716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.690 [2024-12-09 05:48:24.689750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.690 [2024-12-09 05:48:24.689768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.690 [2024-12-09 05:48:24.694838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.690 [2024-12-09 05:48:24.694871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.690 [2024-12-09 05:48:24.694889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.690 [2024-12-09 05:48:24.699937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.690 [2024-12-09 05:48:24.699970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.690 [2024-12-09 05:48:24.699988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.690 [2024-12-09 05:48:24.704790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.690 [2024-12-09 05:48:24.704822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.690 [2024-12-09 05:48:24.704840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.690 [2024-12-09 05:48:24.710073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.690 [2024-12-09 05:48:24.710105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13632 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.690 [2024-12-09 05:48:24.710130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.690 [2024-12-09 05:48:24.716692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.690 [2024-12-09 05:48:24.716725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.690 [2024-12-09 05:48:24.716744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.690 [2024-12-09 05:48:24.724281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.690 [2024-12-09 05:48:24.724329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.690 [2024-12-09 05:48:24.724348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.690 [2024-12-09 05:48:24.732512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.690 [2024-12-09 05:48:24.732546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.690 [2024-12-09 05:48:24.732581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.690 [2024-12-09 05:48:24.740460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.690 [2024-12-09 05:48:24.740494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.690 [2024-12-09 05:48:24.740512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.690 [2024-12-09 05:48:24.748133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.690 [2024-12-09 05:48:24.748165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.690 [2024-12-09 05:48:24.748184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.690 [2024-12-09 05:48:24.755753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.690 [2024-12-09 05:48:24.755788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.690 [2024-12-09 05:48:24.755806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.690 [2024-12-09 05:48:24.763742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.690 [2024-12-09 05:48:24.763777] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.690 [2024-12-09 05:48:24.763795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.690 [2024-12-09 05:48:24.771880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.690 [2024-12-09 05:48:24.771914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.690 [2024-12-09 05:48:24.771948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.690 [2024-12-09 05:48:24.779248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.690 [2024-12-09 05:48:24.779296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.690 [2024-12-09 05:48:24.779316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.690 [2024-12-09 05:48:24.786913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.690 [2024-12-09 05:48:24.786947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.690 [2024-12-09 05:48:24.786966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.690 [2024-12-09 05:48:24.794635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.690 [2024-12-09 05:48:24.794667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.690 [2024-12-09 05:48:24.794686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.690 [2024-12-09 05:48:24.802249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.690 [2024-12-09 05:48:24.802292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.690 [2024-12-09 05:48:24.802312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.690 [2024-12-09 05:48:24.809872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.690 [2024-12-09 05:48:24.809907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.690 [2024-12-09 05:48:24.809926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.690 [2024-12-09 05:48:24.817389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.690 [2024-12-09 05:48:24.817423] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.690 [2024-12-09 05:48:24.817442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.690 [2024-12-09 05:48:24.824877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.690 [2024-12-09 05:48:24.824911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.690 [2024-12-09 05:48:24.824929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.690 [2024-12-09 05:48:24.831165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.690 [2024-12-09 05:48:24.831198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.690 [2024-12-09 05:48:24.831217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.690 [2024-12-09 05:48:24.836495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.690 [2024-12-09 05:48:24.836528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.690 [2024-12-09 05:48:24.836547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.690 [2024-12-09 05:48:24.842595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.690 [2024-12-09 05:48:24.842628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.690 [2024-12-09 05:48:24.842647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.690 [2024-12-09 05:48:24.848098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.690 [2024-12-09 05:48:24.848131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.690 [2024-12-09 05:48:24.848149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.690 [2024-12-09 05:48:24.853731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.690 [2024-12-09 05:48:24.853763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.690 [2024-12-09 05:48:24.853781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.690 [2024-12-09 05:48:24.859902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2273c20) 00:53:30.690 [2024-12-09 05:48:24.859935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.691 [2024-12-09 05:48:24.859954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.691 [2024-12-09 05:48:24.863268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.691 [2024-12-09 05:48:24.863306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.691 [2024-12-09 05:48:24.863325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.691 [2024-12-09 05:48:24.869178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.691 [2024-12-09 05:48:24.869211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.691 [2024-12-09 05:48:24.869230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.691 [2024-12-09 05:48:24.874958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.691 [2024-12-09 05:48:24.874990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.691 [2024-12-09 05:48:24.875008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.691 [2024-12-09 05:48:24.880585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.691 [2024-12-09 05:48:24.880619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.691 [2024-12-09 05:48:24.880638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.691 [2024-12-09 05:48:24.885392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.691 [2024-12-09 05:48:24.885424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.691 [2024-12-09 05:48:24.885449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.691 [2024-12-09 05:48:24.890102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.691 [2024-12-09 05:48:24.890135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.691 [2024-12-09 05:48:24.890153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.691 [2024-12-09 05:48:24.895066] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.691 [2024-12-09 05:48:24.895100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.691 [2024-12-09 05:48:24.895118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.691 [2024-12-09 05:48:24.900496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.691 [2024-12-09 05:48:24.900528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.691 [2024-12-09 05:48:24.900547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.691 [2024-12-09 05:48:24.905044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.691 [2024-12-09 05:48:24.905076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.691 [2024-12-09 05:48:24.905094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.691 [2024-12-09 05:48:24.909691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.691 [2024-12-09 05:48:24.909722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.691 [2024-12-09 05:48:24.909740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.950 [2024-12-09 05:48:24.914145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.950 [2024-12-09 05:48:24.914176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.950 [2024-12-09 05:48:24.914194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.950 [2024-12-09 05:48:24.918446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.950 [2024-12-09 05:48:24.918477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.950 [2024-12-09 05:48:24.918495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.950 [2024-12-09 05:48:24.922770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.950 [2024-12-09 05:48:24.922802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.950 [2024-12-09 05:48:24.922820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:53:30.950 [2024-12-09 05:48:24.927379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.950 [2024-12-09 05:48:24.927411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.950 [2024-12-09 05:48:24.927429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.950 [2024-12-09 05:48:24.932531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.950 [2024-12-09 05:48:24.932564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.950 [2024-12-09 05:48:24.932597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.950 [2024-12-09 05:48:24.937316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.950 [2024-12-09 05:48:24.937347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.950 [2024-12-09 05:48:24.937365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.950 [2024-12-09 05:48:24.941869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.950 [2024-12-09 05:48:24.941913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.950 [2024-12-09 05:48:24.941930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.950 [2024-12-09 05:48:24.946520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.950 [2024-12-09 05:48:24.946551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.950 [2024-12-09 05:48:24.946569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.950 [2024-12-09 05:48:24.951803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.950 [2024-12-09 05:48:24.951850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.950 [2024-12-09 05:48:24.951869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.950 [2024-12-09 05:48:24.957018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.950 [2024-12-09 05:48:24.957050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.950 [2024-12-09 05:48:24.957067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.950 [2024-12-09 05:48:24.962676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.950 [2024-12-09 05:48:24.962721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.950 [2024-12-09 05:48:24.962739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.950 [2024-12-09 05:48:24.967298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.950 [2024-12-09 05:48:24.967331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.950 [2024-12-09 05:48:24.967360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.950 [2024-12-09 05:48:24.972187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.950 [2024-12-09 05:48:24.972220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.950 [2024-12-09 05:48:24.972238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.950 [2024-12-09 05:48:24.976987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.950 [2024-12-09 05:48:24.977019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.950 [2024-12-09 05:48:24.977038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.950 [2024-12-09 05:48:24.981497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.950 [2024-12-09 05:48:24.981529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.950 [2024-12-09 05:48:24.981547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.950 [2024-12-09 05:48:24.986010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.950 [2024-12-09 05:48:24.986042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.950 [2024-12-09 05:48:24.986060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.950 [2024-12-09 05:48:24.990749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.950 [2024-12-09 05:48:24.990781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.950 [2024-12-09 05:48:24.990799] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.950 [2024-12-09 05:48:24.995083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.950 [2024-12-09 05:48:24.995114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.950 [2024-12-09 05:48:24.995131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.950 [2024-12-09 05:48:24.999539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.950 [2024-12-09 05:48:24.999571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.951 [2024-12-09 05:48:24.999589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.951 [2024-12-09 05:48:25.004109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.951 [2024-12-09 05:48:25.004141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.951 [2024-12-09 05:48:25.004158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.951 [2024-12-09 05:48:25.008783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.951 [2024-12-09 05:48:25.008835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.951 [2024-12-09 05:48:25.008854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.951 [2024-12-09 05:48:25.014462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.951 [2024-12-09 05:48:25.014495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.951 [2024-12-09 05:48:25.014513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.951 [2024-12-09 05:48:25.021819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.951 [2024-12-09 05:48:25.021852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.951 [2024-12-09 05:48:25.021870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.951 [2024-12-09 05:48:25.028061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.951 [2024-12-09 05:48:25.028091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.951 [2024-12-09 05:48:25.028108] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.951 [2024-12-09 05:48:25.033794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.951 [2024-12-09 05:48:25.033826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.951 [2024-12-09 05:48:25.033844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.951 [2024-12-09 05:48:25.039098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.951 [2024-12-09 05:48:25.039131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.951 [2024-12-09 05:48:25.039149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.951 [2024-12-09 05:48:25.044829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.951 [2024-12-09 05:48:25.044878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.951 [2024-12-09 05:48:25.044896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.951 [2024-12-09 05:48:25.050015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.951 [2024-12-09 05:48:25.050047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.951 [2024-12-09 05:48:25.050064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.951 [2024-12-09 05:48:25.054502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.951 [2024-12-09 05:48:25.054534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.951 [2024-12-09 05:48:25.054552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.951 [2024-12-09 05:48:25.059247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.951 [2024-12-09 05:48:25.059285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.951 [2024-12-09 05:48:25.059305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.951 [2024-12-09 05:48:25.064166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.951 [2024-12-09 05:48:25.064198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:53:30.951 [2024-12-09 05:48:25.064215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.951 [2024-12-09 05:48:25.068549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.951 [2024-12-09 05:48:25.068581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.951 [2024-12-09 05:48:25.068599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.951 [2024-12-09 05:48:25.073748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.951 [2024-12-09 05:48:25.073797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.951 [2024-12-09 05:48:25.073814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.951 [2024-12-09 05:48:25.080347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.951 [2024-12-09 05:48:25.080379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.951 [2024-12-09 05:48:25.080397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.951 [2024-12-09 05:48:25.087805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.951 [2024-12-09 05:48:25.087836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.951 [2024-12-09 05:48:25.087869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.951 [2024-12-09 05:48:25.093722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.951 [2024-12-09 05:48:25.093769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.951 [2024-12-09 05:48:25.093787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.951 [2024-12-09 05:48:25.099937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.951 [2024-12-09 05:48:25.099969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.951 [2024-12-09 05:48:25.099987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.951 [2024-12-09 05:48:25.104509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.951 [2024-12-09 05:48:25.104541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.951 [2024-12-09 05:48:25.104584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.951 [2024-12-09 05:48:25.109253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.951 [2024-12-09 05:48:25.109295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.951 [2024-12-09 05:48:25.109314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.951 [2024-12-09 05:48:25.113782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.951 [2024-12-09 05:48:25.113813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.951 [2024-12-09 05:48:25.113830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.951 [2024-12-09 05:48:25.119180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.951 [2024-12-09 05:48:25.119213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.951 [2024-12-09 05:48:25.119245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.951 [2024-12-09 05:48:25.126071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.951 [2024-12-09 05:48:25.126102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.951 [2024-12-09 05:48:25.126120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.951 [2024-12-09 05:48:25.133153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.951 [2024-12-09 05:48:25.133185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.951 [2024-12-09 05:48:25.133203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.951 [2024-12-09 05:48:25.138414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.951 [2024-12-09 05:48:25.138446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.951 [2024-12-09 05:48:25.138463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.951 [2024-12-09 05:48:25.144181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.951 [2024-12-09 05:48:25.144214] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.951 [2024-12-09 05:48:25.144233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.951 [2024-12-09 05:48:25.148783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.951 [2024-12-09 05:48:25.148814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.951 [2024-12-09 05:48:25.148832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:30.952 [2024-12-09 05:48:25.153802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.952 [2024-12-09 05:48:25.153843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.952 [2024-12-09 05:48:25.153862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:30.952 [2024-12-09 05:48:25.159676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.952 [2024-12-09 05:48:25.159708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.952 [2024-12-09 05:48:25.159726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:30.952 [2024-12-09 05:48:25.165447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.952 [2024-12-09 05:48:25.165480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.952 [2024-12-09 05:48:25.165498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:30.952 [2024-12-09 05:48:25.170831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:30.952 [2024-12-09 05:48:25.170880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:30.952 [2024-12-09 05:48:25.170898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:31.210 [2024-12-09 05:48:25.176334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.210 [2024-12-09 05:48:25.176367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.210 [2024-12-09 05:48:25.176386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:31.210 [2024-12-09 05:48:25.182194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.210 [2024-12-09 
05:48:25.182226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.210 [2024-12-09 05:48:25.182244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:31.210 [2024-12-09 05:48:25.188022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.210 [2024-12-09 05:48:25.188055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.210 [2024-12-09 05:48:25.188073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:31.211 [2024-12-09 05:48:25.194024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.211 [2024-12-09 05:48:25.194055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.211 [2024-12-09 05:48:25.194072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:31.211 [2024-12-09 05:48:25.199788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.211 [2024-12-09 05:48:25.199835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.211 [2024-12-09 05:48:25.199853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:31.211 [2024-12-09 05:48:25.206048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.211 [2024-12-09 05:48:25.206081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.211 [2024-12-09 05:48:25.206099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:31.211 [2024-12-09 05:48:25.212148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.211 [2024-12-09 05:48:25.212180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.211 [2024-12-09 05:48:25.212198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:31.211 [2024-12-09 05:48:25.217935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.211 [2024-12-09 05:48:25.217982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.211 [2024-12-09 05:48:25.218000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:31.211 [2024-12-09 05:48:25.223941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2273c20) 00:53:31.211 [2024-12-09 05:48:25.223973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.211 [2024-12-09 05:48:25.223992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:31.211 [2024-12-09 05:48:25.230158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.211 [2024-12-09 05:48:25.230190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.211 [2024-12-09 05:48:25.230209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:31.211 [2024-12-09 05:48:25.235880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.211 [2024-12-09 05:48:25.235912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.211 [2024-12-09 05:48:25.235931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:31.211 [2024-12-09 05:48:25.241728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.211 [2024-12-09 05:48:25.241761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.211 [2024-12-09 05:48:25.241780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:31.211 [2024-12-09 05:48:25.247456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.211 [2024-12-09 05:48:25.247489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.211 [2024-12-09 05:48:25.247507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:31.211 [2024-12-09 05:48:25.253249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.211 [2024-12-09 05:48:25.253305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.211 [2024-12-09 05:48:25.253334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:31.211 [2024-12-09 05:48:25.258639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.211 [2024-12-09 05:48:25.258670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.211 [2024-12-09 05:48:25.258689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:31.211 [2024-12-09 05:48:25.264244] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.211 [2024-12-09 05:48:25.264285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.211 [2024-12-09 05:48:25.264306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:31.211 [2024-12-09 05:48:25.269565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.211 [2024-12-09 05:48:25.269598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.211 [2024-12-09 05:48:25.269616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:31.211 [2024-12-09 05:48:25.274204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.211 [2024-12-09 05:48:25.274235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.211 [2024-12-09 05:48:25.274253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:31.211 [2024-12-09 05:48:25.278678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.211 [2024-12-09 05:48:25.278709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.211 [2024-12-09 05:48:25.278727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:31.211 [2024-12-09 05:48:25.283032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.211 [2024-12-09 05:48:25.283078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.211 [2024-12-09 05:48:25.283096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:31.211 [2024-12-09 05:48:25.287348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.211 [2024-12-09 05:48:25.287380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.211 [2024-12-09 05:48:25.287397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:31.211 [2024-12-09 05:48:25.291822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.211 [2024-12-09 05:48:25.291853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.211 [2024-12-09 05:48:25.291870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:53:31.211 [2024-12-09 05:48:25.296255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.211 [2024-12-09 05:48:25.296293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.211 [2024-12-09 05:48:25.296312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:31.211 [2024-12-09 05:48:25.300685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.211 [2024-12-09 05:48:25.300716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.211 [2024-12-09 05:48:25.300734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:31.211 [2024-12-09 05:48:25.305123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.211 [2024-12-09 05:48:25.305154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.211 [2024-12-09 05:48:25.305171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:31.211 [2024-12-09 05:48:25.309570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.211 [2024-12-09 05:48:25.309602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.211 [2024-12-09 05:48:25.309619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:31.211 [2024-12-09 05:48:25.313998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.211 [2024-12-09 05:48:25.314029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.211 [2024-12-09 05:48:25.314046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:31.211 [2024-12-09 05:48:25.318438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.211 [2024-12-09 05:48:25.318468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.211 [2024-12-09 05:48:25.318485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:31.211 [2024-12-09 05:48:25.322957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.211 [2024-12-09 05:48:25.322989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.211 [2024-12-09 05:48:25.323005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:31.211 [2024-12-09 05:48:25.327467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.211 [2024-12-09 05:48:25.327498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.212 [2024-12-09 05:48:25.327515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:31.212 [2024-12-09 05:48:25.332083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.212 [2024-12-09 05:48:25.332114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.212 [2024-12-09 05:48:25.332139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:31.212 [2024-12-09 05:48:25.337177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.212 [2024-12-09 05:48:25.337208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.212 [2024-12-09 05:48:25.337226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:31.212 [2024-12-09 05:48:25.342706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.212 [2024-12-09 05:48:25.342755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.212 [2024-12-09 05:48:25.342773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:31.212 [2024-12-09 05:48:25.348587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.212 [2024-12-09 05:48:25.348620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.212 [2024-12-09 05:48:25.348638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:31.212 [2024-12-09 05:48:25.353707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.212 [2024-12-09 05:48:25.353740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.212 [2024-12-09 05:48:25.353757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:31.212 [2024-12-09 05:48:25.359024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.212 [2024-12-09 05:48:25.359057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.212 [2024-12-09 05:48:25.359090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:31.212 [2024-12-09 05:48:25.364303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.212 [2024-12-09 05:48:25.364336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.212 [2024-12-09 05:48:25.364354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:31.212 [2024-12-09 05:48:25.370257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.212 [2024-12-09 05:48:25.370296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.212 [2024-12-09 05:48:25.370315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:31.212 [2024-12-09 05:48:25.374947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.212 [2024-12-09 05:48:25.374978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.212 [2024-12-09 05:48:25.374996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:31.212 [2024-12-09 05:48:25.379499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.212 [2024-12-09 05:48:25.379537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.212 [2024-12-09 05:48:25.379556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:31.212 [2024-12-09 05:48:25.384101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.212 [2024-12-09 05:48:25.384133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.212 [2024-12-09 05:48:25.384151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:31.212 [2024-12-09 05:48:25.390155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.212 [2024-12-09 05:48:25.390187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.212 [2024-12-09 05:48:25.390205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:31.212 [2024-12-09 05:48:25.395598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.212 [2024-12-09 05:48:25.395629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.212 
[2024-12-09 05:48:25.395663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:31.212 [2024-12-09 05:48:25.401535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.212 [2024-12-09 05:48:25.401584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.212 [2024-12-09 05:48:25.401603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:31.212 [2024-12-09 05:48:25.407044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.212 [2024-12-09 05:48:25.407078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.212 [2024-12-09 05:48:25.407097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:31.212 [2024-12-09 05:48:25.412387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.212 [2024-12-09 05:48:25.412419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.212 [2024-12-09 05:48:25.412437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:31.212 [2024-12-09 05:48:25.418342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.212 [2024-12-09 05:48:25.418374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.212 [2024-12-09 05:48:25.418393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:31.212 [2024-12-09 05:48:25.424364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.212 [2024-12-09 05:48:25.424397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.212 [2024-12-09 05:48:25.424416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:31.212 [2024-12-09 05:48:25.430358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.212 [2024-12-09 05:48:25.430391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:31.212 [2024-12-09 05:48:25.430409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:53:31.470 [2024-12-09 05:48:25.436886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2273c20) 00:53:31.470 [2024-12-09 05:48:25.436934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17760 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:53:31.470 [2024-12-09 05:48:25.436952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:53:31.470 5553.00 IOPS, 694.12 MiB/s
00:53:31.470 Latency(us)
00:53:31.470 [2024-12-09T04:48:25.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:53:31.470 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:53:31.470 nvme0n1 : 2.00 5551.28 693.91 0.00 0.00 2878.07 621.99 13786.83
00:53:31.470 [2024-12-09T04:48:25.695Z] ===================================================================================================================
00:53:31.470 [2024-12-09T04:48:25.695Z] Total : 5551.28 693.91 0.00 0.00 2878.07 621.99 13786.83
00:53:31.470 {
00:53:31.470 "results": [
00:53:31.470 {
00:53:31.470 "job": "nvme0n1",
00:53:31.470 "core_mask": "0x2",
00:53:31.470 "workload": "randread",
00:53:31.470 "status": "finished",
00:53:31.470 "queue_depth": 16,
00:53:31.470 "io_size": 131072,
00:53:31.470 "runtime": 2.003503,
00:53:31.470 "iops": 5551.276938442318,
00:53:31.470 "mibps": 693.9096173052898,
00:53:31.470 "io_failed": 0,
00:53:31.470 "io_timeout": 0,
00:53:31.470 "avg_latency_us": 2878.0653242489025,
00:53:31.470 "min_latency_us": 621.9851851851852,
00:53:31.470 "max_latency_us": 13786.832592592593
00:53:31.470 }
00:53:31.470 ],
00:53:31.470 "core_count": 1
00:53:31.470 }
00:53:31.470 05:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:53:31.470 05:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:53:31.470 05:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:53:31.470 | .driver_specific
00:53:31.470 | .nvme_error
00:53:31.470 | .status_code
00:53:31.470 | .command_transient_transport_error'
00:53:31.470 05:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:53:31.728 05:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 359 > 0 ))
00:53:31.728 05:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 755959
00:53:31.728 05:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 755959 ']'
00:53:31.728 05:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 755959
00:53:31.728 05:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:53:31.728 05:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:53:31.728 05:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 755959
00:53:31.728 05:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:53:31.728 05:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:53:31.728 05:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 755959'
killing process with pid 755959
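The pass/fail decision for the randread leg is visible in the trace just above: get_transient_errcount fetches bdev_get_iostat over the bdevperf RPC socket and extracts the command_transient_transport_error counter with jq, and host/digest.sh then requires a non-zero count (359 in this run). A condensed, illustrative sketch of that check; the socket path, rpc.py location and the nvme0n1 bdev name are copied from the trace, while the variable names and the single-expression jq filter are mine, so treat this as a sketch rather than the literal digest.sh source:

# Sketch only: condensed from the xtrace above, not the literal host/digest.sh code.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock

get_transient_errcount() {
    local bdev=$1
    # bdev_get_iostat returns per-bdev statistics as JSON; with --nvme-error-stat
    # enabled, NVMe bdevs also expose per-status-code error counters under
    # .driver_specific.nvme_error.status_code.
    "$rpc_py" -s "$bperf_sock" bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
}

errs=$(get_transient_errcount nvme0n1)
# The injected crc32c corruption must have surfaced as transient transport errors.
(( errs > 0 ))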
05:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 755959
00:53:31.728 Received shutdown signal, test time was about 2.000000 seconds
00:53:31.728
00:53:31.728 Latency(us)
00:53:31.728 [2024-12-09T04:48:25.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:53:31.728 [2024-12-09T04:48:25.953Z] ===================================================================================================================
00:53:31.728 [2024-12-09T04:48:25.953Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:53:31.728 05:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 755959
00:53:31.986 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:53:31.986 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:53:31.986 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:53:31.986 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:53:31.986 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:53:31.986 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=756370
00:53:31.986 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:53:31.986 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 756370 /var/tmp/bperf.sock
00:53:31.986 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 756370 ']'
00:53:31.986 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:53:31.986 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:53:31.986 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:53:31.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:53:31.986 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:53:31.986 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:53:31.986 [2024-12-09 05:48:26.104224] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization...
00:53:31.986 [2024-12-09 05:48:26.104331] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid756370 ]
00:53:31.986 [2024-12-09 05:48:26.169489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:53:32.244 [2024-12-09 05:48:26.224952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:53:32.244 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:53:32.244 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:53:32.244 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:53:32.244 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:53:32.501 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:53:32.501 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:53:32.502 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:53:32.502 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:53:32.502 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:53:32.502 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:53:32.759 nvme0n1
00:53:32.760 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:53:32.760 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:53:32.760 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:53:32.760 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:53:32.760 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:53:32.760 05:48:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:53:33.018 Running I/O for 2 seconds...
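The randwrite leg repeats the same cycle, and the xtrace above shows the setup that matters: bdevperf is started idle on its own RPC socket (-z makes it wait for a perform_tests RPC), NVMe error counters are enabled, the controller is attached with TCP data digests (--ddgst), and crc32c error injection is re-armed through rpc_cmd before the timed run starts. A condensed, illustrative sketch of that sequence; the command names, flags and addresses are copied from the trace, while the helper definitions, variable names and the assumption that rpc_cmd targets the default SPDK RPC socket (the nvmf target in this job) are mine:

# Sketch only: condensed from the xtrace above; waitforlisten and xtrace plumbing omitted.
spdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bperf_sock=/var/tmp/bperf.sock

# bperf_rpc drives the bdevperf instance on its private socket; rpc_cmd is assumed
# to behave like the autotest helper and talk to the default RPC socket instead.
bperf_rpc() { "$spdk_dir"/scripts/rpc.py -s "$bperf_sock" "$@"; }
rpc_cmd()   { "$spdk_dir"/scripts/rpc.py "$@"; }

# Start bdevperf idle; -z keeps it waiting for a perform_tests RPC before issuing I/O.
"$spdk_dir"/build/examples/bdevperf -m 2 -r "$bperf_sock" -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!

bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc_cmd accel_error_inject_error -o crc32c -t disable        # injection off while connecting
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256  # re-arm crc32c corruption (values as traced)

# Kick off the timed run; the corrupted digests surface as the data digest errors
# and TRANSIENT TRANSPORT ERROR completions logged around this point.
"$spdk_dir"/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests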
00:53:33.018 [2024-12-09 05:48:27.073587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eeb760 00:53:33.018 [2024-12-09 05:48:27.074936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.018 [2024-12-09 05:48:27.074989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:53:33.018 [2024-12-09 05:48:27.085633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef0bc0 00:53:33.018 [2024-12-09 05:48:27.086955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.018 [2024-12-09 05:48:27.087001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:33.018 [2024-12-09 05:48:27.097478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eecc78 00:53:33.018 [2024-12-09 05:48:27.098799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.018 [2024-12-09 05:48:27.098843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:53:33.018 [2024-12-09 05:48:27.109384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eef6a8 00:53:33.018 [2024-12-09 05:48:27.110709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.018 [2024-12-09 05:48:27.110737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:53:33.018 [2024-12-09 05:48:27.121437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee88f8 00:53:33.018 [2024-12-09 05:48:27.122758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.018 [2024-12-09 05:48:27.122813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:53:33.018 [2024-12-09 05:48:27.133151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016efc128 00:53:33.018 [2024-12-09 05:48:27.134178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.018 [2024-12-09 05:48:27.134235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:53:33.018 [2024-12-09 05:48:27.144364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef6890 00:53:33.018 [2024-12-09 05:48:27.145269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.018 [2024-12-09 05:48:27.145309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0052 p:0 
m:0 dnr:0 00:53:33.018 [2024-12-09 05:48:27.156059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eeaef0 00:53:33.018 [2024-12-09 05:48:27.157002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.018 [2024-12-09 05:48:27.157046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:53:33.018 [2024-12-09 05:48:27.170697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef6458 00:53:33.018 [2024-12-09 05:48:27.172421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.018 [2024-12-09 05:48:27.172466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:53:33.018 [2024-12-09 05:48:27.182467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eef6a8 00:53:33.018 [2024-12-09 05:48:27.184094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.018 [2024-12-09 05:48:27.184140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:53:33.018 [2024-12-09 05:48:27.191719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef0350 00:53:33.018 [2024-12-09 05:48:27.192776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.018 [2024-12-09 05:48:27.192821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:53:33.018 [2024-12-09 05:48:27.204015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef7100 00:53:33.018 [2024-12-09 05:48:27.204749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.018 [2024-12-09 05:48:27.204796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:53:33.018 [2024-12-09 05:48:27.215675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee7c50 00:53:33.018 [2024-12-09 05:48:27.216480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.018 [2024-12-09 05:48:27.216538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:53:33.019 [2024-12-09 05:48:27.227436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee6fa8 00:53:33.019 [2024-12-09 05:48:27.228295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.019 [2024-12-09 05:48:27.228350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 
sqhd:0003 p:0 m:0 dnr:0 00:53:33.019 [2024-12-09 05:48:27.239698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef46d0 00:53:33.019 [2024-12-09 05:48:27.240631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.019 [2024-12-09 05:48:27.240667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:33.290 [2024-12-09 05:48:27.253882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee23b8 00:53:33.290 [2024-12-09 05:48:27.255259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.290 [2024-12-09 05:48:27.255297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:33.290 [2024-12-09 05:48:27.264008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016efb480 00:53:33.290 [2024-12-09 05:48:27.264743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.290 [2024-12-09 05:48:27.264773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:53:33.290 [2024-12-09 05:48:27.276174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee5ec8 00:53:33.290 [2024-12-09 05:48:27.277120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.290 [2024-12-09 05:48:27.277184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:53:33.290 [2024-12-09 05:48:27.288337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef7da8 00:53:33.290 [2024-12-09 05:48:27.289190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.290 [2024-12-09 05:48:27.289235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:33.290 [2024-12-09 05:48:27.300554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef6890 00:53:33.290 [2024-12-09 05:48:27.301582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.290 [2024-12-09 05:48:27.301612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:53:33.290 [2024-12-09 05:48:27.311695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef1868 00:53:33.290 [2024-12-09 05:48:27.313532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.290 [2024-12-09 05:48:27.313560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:53:33.290 [2024-12-09 05:48:27.321765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef81e0 00:53:33.290 [2024-12-09 05:48:27.322512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.290 [2024-12-09 05:48:27.322540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:33.290 [2024-12-09 05:48:27.334133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee0ea0 00:53:33.290 [2024-12-09 05:48:27.335019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.290 [2024-12-09 05:48:27.335062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:53:33.290 [2024-12-09 05:48:27.346211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef7da8 00:53:33.290 [2024-12-09 05:48:27.347130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.290 [2024-12-09 05:48:27.347174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:33.290 [2024-12-09 05:48:27.357613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef0ff8 00:53:33.290 [2024-12-09 05:48:27.358486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.290 [2024-12-09 05:48:27.358515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:53:33.290 [2024-12-09 05:48:27.370009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee84c0 00:53:33.290 [2024-12-09 05:48:27.371019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.290 [2024-12-09 05:48:27.371047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:53:33.290 [2024-12-09 05:48:27.383193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee23b8 00:53:33.290 [2024-12-09 05:48:27.384585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.290 [2024-12-09 05:48:27.384615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:53:33.290 [2024-12-09 05:48:27.395600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016efac10 00:53:33.290 [2024-12-09 05:48:27.396925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.290 [2024-12-09 05:48:27.396953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:53:33.290 [2024-12-09 05:48:27.405486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eebb98 00:53:33.290 [2024-12-09 05:48:27.406177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.290 [2024-12-09 05:48:27.406207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:33.290 [2024-12-09 05:48:27.417767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef0ff8 00:53:33.290 [2024-12-09 05:48:27.418716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.290 [2024-12-09 05:48:27.418759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:53:33.290 [2024-12-09 05:48:27.429928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee23b8 00:53:33.290 [2024-12-09 05:48:27.430939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.290 [2024-12-09 05:48:27.430983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:53:33.291 [2024-12-09 05:48:27.443204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee95a0 00:53:33.291 [2024-12-09 05:48:27.444884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.291 [2024-12-09 05:48:27.444945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:53:33.291 [2024-12-09 05:48:27.455827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef0350 00:53:33.291 [2024-12-09 05:48:27.457650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.291 [2024-12-09 05:48:27.457722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:53:33.291 [2024-12-09 05:48:27.464354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016efef90 00:53:33.291 [2024-12-09 05:48:27.465066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.291 [2024-12-09 05:48:27.465126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:53:33.291 [2024-12-09 05:48:27.479147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016efda78 00:53:33.291 [2024-12-09 05:48:27.480756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.291 [2024-12-09 05:48:27.480818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:53:33.291 [2024-12-09 05:48:27.487091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef8618 00:53:33.291 [2024-12-09 05:48:27.487801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.291 [2024-12-09 05:48:27.487860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:53:33.291 [2024-12-09 05:48:27.499874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016edfdc0 00:53:33.291 [2024-12-09 05:48:27.500522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.291 [2024-12-09 05:48:27.500553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:53:33.291 [2024-12-09 05:48:27.512714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef7538 00:53:33.291 [2024-12-09 05:48:27.514134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.291 [2024-12-09 05:48:27.514165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:53:33.549 [2024-12-09 05:48:27.525295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee0630 00:53:33.549 [2024-12-09 05:48:27.526739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.549 [2024-12-09 05:48:27.526793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:53:33.549 [2024-12-09 05:48:27.536959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eedd58 00:53:33.549 [2024-12-09 05:48:27.537982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.549 [2024-12-09 05:48:27.538012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:53:33.549 [2024-12-09 05:48:27.548907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef7da8 00:53:33.549 [2024-12-09 05:48:27.550042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.549 [2024-12-09 05:48:27.550109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:53:33.549 [2024-12-09 05:48:27.562147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef7538 00:53:33.549 [2024-12-09 05:48:27.563870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.549 [2024-12-09 05:48:27.563900] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:53:33.549 [2024-12-09 05:48:27.570400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef6020 00:53:33.549 [2024-12-09 05:48:27.571088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.549 [2024-12-09 05:48:27.571131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:53:33.549 [2024-12-09 05:48:27.583980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef20d8 00:53:33.549 [2024-12-09 05:48:27.585228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.549 [2024-12-09 05:48:27.585277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:53:33.549 [2024-12-09 05:48:27.596378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eeff18 00:53:33.549 [2024-12-09 05:48:27.597775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.549 [2024-12-09 05:48:27.597836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:53:33.549 [2024-12-09 05:48:27.607377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016efe2e8 00:53:33.549 [2024-12-09 05:48:27.608387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.549 [2024-12-09 05:48:27.608449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:53:33.549 [2024-12-09 05:48:27.618096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee5220 00:53:33.549 [2024-12-09 05:48:27.619060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.549 [2024-12-09 05:48:27.619101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:53:33.549 [2024-12-09 05:48:27.631365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef2948 00:53:33.549 [2024-12-09 05:48:27.632590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.549 [2024-12-09 05:48:27.632632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:53:33.549 [2024-12-09 05:48:27.643690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee0630 00:53:33.549 [2024-12-09 05:48:27.645080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.549 [2024-12-09 05:48:27.645142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:53:33.550 [2024-12-09 05:48:27.656124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee6300 00:53:33.550 [2024-12-09 05:48:27.657767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.550 [2024-12-09 05:48:27.657835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:53:33.550 [2024-12-09 05:48:27.668441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef6890 00:53:33.550 [2024-12-09 05:48:27.670175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.550 [2024-12-09 05:48:27.670205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:53:33.550 [2024-12-09 05:48:27.676764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee1710 00:53:33.550 [2024-12-09 05:48:27.677508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.550 [2024-12-09 05:48:27.677537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:53:33.550 [2024-12-09 05:48:27.688887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef92c0 00:53:33.550 [2024-12-09 05:48:27.689655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.550 [2024-12-09 05:48:27.689699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:33.550 [2024-12-09 05:48:27.699941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee38d0 00:53:33.550 [2024-12-09 05:48:27.700676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.550 [2024-12-09 05:48:27.700704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:53:33.550 [2024-12-09 05:48:27.713225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef8618 00:53:33.550 [2024-12-09 05:48:27.714170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.550 [2024-12-09 05:48:27.714232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:53:33.550 [2024-12-09 05:48:27.725468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef4298 00:53:33.550 [2024-12-09 05:48:27.726556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.550 [2024-12-09 
05:48:27.726618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:53:33.550 [2024-12-09 05:48:27.736813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef4f40 00:53:33.550 [2024-12-09 05:48:27.737858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.550 [2024-12-09 05:48:27.737912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:53:33.550 [2024-12-09 05:48:27.750507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eec408 00:53:33.550 [2024-12-09 05:48:27.751883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.550 [2024-12-09 05:48:27.751928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:53:33.550 [2024-12-09 05:48:27.762193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef7538 00:53:33.550 [2024-12-09 05:48:27.763806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.550 [2024-12-09 05:48:27.763871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:53:33.808 [2024-12-09 05:48:27.774999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee7c50 00:53:33.808 [2024-12-09 05:48:27.776846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.808 [2024-12-09 05:48:27.776876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:53:33.808 [2024-12-09 05:48:27.786968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee8088 00:53:33.808 [2024-12-09 05:48:27.788634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.808 [2024-12-09 05:48:27.788678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:53:33.808 [2024-12-09 05:48:27.798471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef9f68 00:53:33.808 [2024-12-09 05:48:27.800129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.808 [2024-12-09 05:48:27.800174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:33.808 [2024-12-09 05:48:27.807489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016efeb58 00:53:33.808 [2024-12-09 05:48:27.808446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.808 
[2024-12-09 05:48:27.808491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:53:33.808 [2024-12-09 05:48:27.819046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eed4e8 00:53:33.808 [2024-12-09 05:48:27.819858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.808 [2024-12-09 05:48:27.819912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:53:33.808 [2024-12-09 05:48:27.832960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef5378 00:53:33.808 [2024-12-09 05:48:27.834189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.808 [2024-12-09 05:48:27.834233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:53:33.808 [2024-12-09 05:48:27.843960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016efdeb0 00:53:33.808 [2024-12-09 05:48:27.845055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.808 [2024-12-09 05:48:27.845101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:53:33.808 [2024-12-09 05:48:27.856080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef1ca0 00:53:33.808 [2024-12-09 05:48:27.857476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.808 [2024-12-09 05:48:27.857522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:53:33.808 [2024-12-09 05:48:27.867850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eeb760 00:53:33.809 [2024-12-09 05:48:27.869125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.809 [2024-12-09 05:48:27.869157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:53:33.809 [2024-12-09 05:48:27.879334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee0a68 00:53:33.809 [2024-12-09 05:48:27.880324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.809 [2024-12-09 05:48:27.880378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:53:33.809 [2024-12-09 05:48:27.891841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eeb328 00:53:33.809 [2024-12-09 05:48:27.893276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18649 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:53:33.809 [2024-12-09 05:48:27.893331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:53:33.809 [2024-12-09 05:48:27.903499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee2c28 00:53:33.809 [2024-12-09 05:48:27.904810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.809 [2024-12-09 05:48:27.904856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:53:33.809 [2024-12-09 05:48:27.915158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee6fa8 00:53:33.809 [2024-12-09 05:48:27.916138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.809 [2024-12-09 05:48:27.916169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:53:33.809 [2024-12-09 05:48:27.926250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee01f8 00:53:33.809 [2024-12-09 05:48:27.927118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.809 [2024-12-09 05:48:27.927149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:53:33.809 [2024-12-09 05:48:27.939379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016efcdd0 00:53:33.809 [2024-12-09 05:48:27.940820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.809 [2024-12-09 05:48:27.940877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:53:33.809 [2024-12-09 05:48:27.950636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eecc78 00:53:33.809 [2024-12-09 05:48:27.951914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.809 [2024-12-09 05:48:27.951972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:53:33.809 [2024-12-09 05:48:27.961584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee3d08 00:53:33.809 [2024-12-09 05:48:27.962908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.809 [2024-12-09 05:48:27.962944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:53:33.809 [2024-12-09 05:48:27.972836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eeb328 00:53:33.809 [2024-12-09 05:48:27.973699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11796 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:53:33.809 [2024-12-09 05:48:27.973741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:53:33.809 [2024-12-09 05:48:27.983949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef6890 00:53:33.809 [2024-12-09 05:48:27.984682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.809 [2024-12-09 05:48:27.984734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:53:33.809 [2024-12-09 05:48:27.996232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef8e88 00:53:33.809 [2024-12-09 05:48:27.997248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.809 [2024-12-09 05:48:27.997301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:53:33.809 [2024-12-09 05:48:28.008477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef1ca0 00:53:33.809 [2024-12-09 05:48:28.009200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.809 [2024-12-09 05:48:28.009244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:53:33.809 [2024-12-09 05:48:28.021023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef3e60 00:53:33.809 [2024-12-09 05:48:28.022253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:33.809 [2024-12-09 05:48:28.022292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:53:34.068 [2024-12-09 05:48:28.034436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee95a0 00:53:34.068 [2024-12-09 05:48:28.035834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.068 [2024-12-09 05:48:28.035866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:53:34.068 [2024-12-09 05:48:28.045157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016efeb58 00:53:34.068 [2024-12-09 05:48:28.046340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.068 [2024-12-09 05:48:28.046400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:53:34.068 [2024-12-09 05:48:28.056726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016edfdc0 00:53:34.068 [2024-12-09 05:48:28.057499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7252 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.068 [2024-12-09 05:48:28.057545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:53:34.068 21460.00 IOPS, 83.83 MiB/s [2024-12-09T04:48:28.293Z] [2024-12-09 05:48:28.070893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef0ff8 00:53:34.068 [2024-12-09 05:48:28.072141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.068 [2024-12-09 05:48:28.072186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:53:34.068 [2024-12-09 05:48:28.081928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee8088 00:53:34.068 [2024-12-09 05:48:28.083152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.068 [2024-12-09 05:48:28.083183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:53:34.068 [2024-12-09 05:48:28.094893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eeaef0 00:53:34.068 [2024-12-09 05:48:28.096122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.068 [2024-12-09 05:48:28.096165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:53:34.068 [2024-12-09 05:48:28.109127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016efa3a0 00:53:34.068 [2024-12-09 05:48:28.110993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.068 [2024-12-09 05:48:28.111039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:53:34.068 [2024-12-09 05:48:28.117616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eeea00 00:53:34.068 [2024-12-09 05:48:28.118611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.068 [2024-12-09 05:48:28.118643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:53:34.068 [2024-12-09 05:48:28.129922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee5658 00:53:34.068 [2024-12-09 05:48:28.130634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.068 [2024-12-09 05:48:28.130678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:53:34.068 [2024-12-09 05:48:28.143507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016efac10 00:53:34.068 [2024-12-09 05:48:28.144810] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.068 [2024-12-09 05:48:28.144853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:53:34.068 [2024-12-09 05:48:28.155719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee1f80 00:53:34.068 [2024-12-09 05:48:28.157278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.068 [2024-12-09 05:48:28.157322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:53:34.068 [2024-12-09 05:48:28.164620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee01f8 00:53:34.068 [2024-12-09 05:48:28.165485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.068 [2024-12-09 05:48:28.165529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:53:34.068 [2024-12-09 05:48:28.179116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef1430 00:53:34.068 [2024-12-09 05:48:28.180489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.068 [2024-12-09 05:48:28.180544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:53:34.068 [2024-12-09 05:48:28.190217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef8618 00:53:34.068 [2024-12-09 05:48:28.191462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.068 [2024-12-09 05:48:28.191509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:34.068 [2024-12-09 05:48:28.202463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef92c0 00:53:34.068 [2024-12-09 05:48:28.203983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.068 [2024-12-09 05:48:28.204027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:53:34.068 [2024-12-09 05:48:28.214165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee8d30 00:53:34.068 [2024-12-09 05:48:28.215654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.068 [2024-12-09 05:48:28.215697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:53:34.068 [2024-12-09 05:48:28.225815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef6020 00:53:34.068 [2024-12-09 05:48:28.226883] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.068 [2024-12-09 05:48:28.226950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:53:34.068 [2024-12-09 05:48:28.236953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee23b8 00:53:34.068 [2024-12-09 05:48:28.237870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.068 [2024-12-09 05:48:28.237901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:53:34.068 [2024-12-09 05:48:28.248206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef2d80 00:53:34.068 [2024-12-09 05:48:28.249038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.068 [2024-12-09 05:48:28.249084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:53:34.068 [2024-12-09 05:48:28.261815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eefae0 00:53:34.068 [2024-12-09 05:48:28.263258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.068 [2024-12-09 05:48:28.263309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:53:34.068 [2024-12-09 05:48:28.272982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee5a90 00:53:34.068 [2024-12-09 05:48:28.274164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.069 [2024-12-09 05:48:28.274228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:53:34.069 [2024-12-09 05:48:28.283912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee9168 00:53:34.069 [2024-12-09 05:48:28.285063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.069 [2024-12-09 05:48:28.285109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:53:34.327 [2024-12-09 05:48:28.296075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016efbcf0 00:53:34.327 [2024-12-09 05:48:28.297252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.327 [2024-12-09 05:48:28.297306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:53:34.328 [2024-12-09 05:48:28.307076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee6738 00:53:34.328 
[2024-12-09 05:48:28.308161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.328 [2024-12-09 05:48:28.308212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:53:34.328 [2024-12-09 05:48:28.321516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef7970 00:53:34.328 [2024-12-09 05:48:28.323200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.328 [2024-12-09 05:48:28.323244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:53:34.328 [2024-12-09 05:48:28.333154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef0350 00:53:34.328 [2024-12-09 05:48:28.334787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.328 [2024-12-09 05:48:28.334832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:53:34.328 [2024-12-09 05:48:28.341621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eea680 00:53:34.328 [2024-12-09 05:48:28.342474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.328 [2024-12-09 05:48:28.342519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:53:34.328 [2024-12-09 05:48:28.356269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef7970 00:53:34.328 [2024-12-09 05:48:28.357668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.328 [2024-12-09 05:48:28.357698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:53:34.328 [2024-12-09 05:48:28.367578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eec840 00:53:34.328 [2024-12-09 05:48:28.368826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.328 [2024-12-09 05:48:28.368871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:53:34.328 [2024-12-09 05:48:28.379732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef5378 00:53:34.328 [2024-12-09 05:48:28.381238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.328 [2024-12-09 05:48:28.381291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:53:34.328 [2024-12-09 05:48:28.388758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with 
pdu=0x200016ee4140 00:53:34.328 [2024-12-09 05:48:28.389546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.328 [2024-12-09 05:48:28.389592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:53:34.328 [2024-12-09 05:48:28.400845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016edf550 00:53:34.328 [2024-12-09 05:48:28.401654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.328 [2024-12-09 05:48:28.401699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:53:34.328 [2024-12-09 05:48:28.413098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016edece0 00:53:34.328 [2024-12-09 05:48:28.413951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.328 [2024-12-09 05:48:28.413997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:34.328 [2024-12-09 05:48:28.425343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee7818 00:53:34.328 [2024-12-09 05:48:28.426140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.328 [2024-12-09 05:48:28.426171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:53:34.328 [2024-12-09 05:48:28.439859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eeb760 00:53:34.328 [2024-12-09 05:48:28.440909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.328 [2024-12-09 05:48:28.440940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:53:34.328 [2024-12-09 05:48:28.451034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eedd58 00:53:34.328 [2024-12-09 05:48:28.451932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.328 [2024-12-09 05:48:28.451986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:53:34.328 [2024-12-09 05:48:28.462192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016efc128 00:53:34.328 [2024-12-09 05:48:28.462941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.328 [2024-12-09 05:48:28.462972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:53:34.328 [2024-12-09 05:48:28.475854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x78cd50) with pdu=0x200016ee8088 00:53:34.328 [2024-12-09 05:48:28.477250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.328 [2024-12-09 05:48:28.477302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:53:34.328 [2024-12-09 05:48:28.486998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee6300 00:53:34.328 [2024-12-09 05:48:28.488248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.328 [2024-12-09 05:48:28.488301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:34.328 [2024-12-09 05:48:28.498135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ede8a8 00:53:34.328 [2024-12-09 05:48:28.499255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.328 [2024-12-09 05:48:28.499310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:53:34.328 [2024-12-09 05:48:28.509419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef7100 00:53:34.328 [2024-12-09 05:48:28.510375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.328 [2024-12-09 05:48:28.510432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:53:34.328 [2024-12-09 05:48:28.521857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef4f40 00:53:34.328 [2024-12-09 05:48:28.523133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.328 [2024-12-09 05:48:28.523179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:53:34.328 [2024-12-09 05:48:28.534294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016efeb58 00:53:34.328 [2024-12-09 05:48:28.535286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.328 [2024-12-09 05:48:28.535317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:53:34.328 [2024-12-09 05:48:28.545522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef8a50 00:53:34.328 [2024-12-09 05:48:28.546359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.328 [2024-12-09 05:48:28.546390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:53:34.587 [2024-12-09 05:48:28.558867] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee9e10 00:53:34.587 [2024-12-09 05:48:28.560322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.587 [2024-12-09 05:48:28.560368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:53:34.587 [2024-12-09 05:48:28.570171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016efa3a0 00:53:34.587 [2024-12-09 05:48:28.571528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.587 [2024-12-09 05:48:28.571589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:53:34.587 [2024-12-09 05:48:28.581828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ede038 00:53:34.587 [2024-12-09 05:48:28.582811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.587 [2024-12-09 05:48:28.582848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:53:34.587 [2024-12-09 05:48:28.593005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef92c0 00:53:34.587 [2024-12-09 05:48:28.593805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.587 [2024-12-09 05:48:28.593836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:53:34.587 [2024-12-09 05:48:28.604104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eee190 00:53:34.587 [2024-12-09 05:48:28.604805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.588 [2024-12-09 05:48:28.604845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:53:34.588 [2024-12-09 05:48:28.617746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016edf550 00:53:34.588 [2024-12-09 05:48:28.619088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.588 [2024-12-09 05:48:28.619135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:53:34.588 [2024-12-09 05:48:28.628975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016efc560 00:53:34.588 [2024-12-09 05:48:28.630104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.588 [2024-12-09 05:48:28.630163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:53:34.588 [2024-12-09 05:48:28.640097] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef4f40 00:53:34.588 [2024-12-09 05:48:28.641128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.588 [2024-12-09 05:48:28.641174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:53:34.588 [2024-12-09 05:48:28.651239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef0ff8 00:53:34.588 [2024-12-09 05:48:28.652130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.588 [2024-12-09 05:48:28.652190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:53:34.588 [2024-12-09 05:48:28.662426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef3e60 00:53:34.588 [2024-12-09 05:48:28.663153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.588 [2024-12-09 05:48:28.663197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:53:34.588 [2024-12-09 05:48:28.677220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eeea00 00:53:34.588 [2024-12-09 05:48:28.678867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.588 [2024-12-09 05:48:28.678926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:53:34.588 [2024-12-09 05:48:28.685763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eef6a8 00:53:34.588 [2024-12-09 05:48:28.686641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.588 [2024-12-09 05:48:28.686670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:53:34.588 [2024-12-09 05:48:28.697592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee99d8 00:53:34.588 [2024-12-09 05:48:28.698311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.588 [2024-12-09 05:48:28.698357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:53:34.588 [2024-12-09 05:48:28.711606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee8088 00:53:34.588 [2024-12-09 05:48:28.712792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.588 [2024-12-09 05:48:28.712822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:53:34.588 [2024-12-09 
05:48:28.722948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef31b8 00:53:34.588 [2024-12-09 05:48:28.723989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.588 [2024-12-09 05:48:28.724035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:53:34.588 [2024-12-09 05:48:28.734074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef5be8 00:53:34.588 [2024-12-09 05:48:28.734947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.588 [2024-12-09 05:48:28.735006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:53:34.588 [2024-12-09 05:48:28.745347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef8a50 00:53:34.588 [2024-12-09 05:48:28.746096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.588 [2024-12-09 05:48:28.746141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:53:34.588 [2024-12-09 05:48:28.757650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee01f8 00:53:34.588 [2024-12-09 05:48:28.758708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.588 [2024-12-09 05:48:28.758752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:53:34.588 [2024-12-09 05:48:28.769397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016efc128 00:53:34.588 [2024-12-09 05:48:28.770321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.588 [2024-12-09 05:48:28.770366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:53:34.588 [2024-12-09 05:48:28.783406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee9168 00:53:34.588 [2024-12-09 05:48:28.784781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.588 [2024-12-09 05:48:28.784826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:53:34.588 [2024-12-09 05:48:28.794719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef20d8 00:53:34.588 [2024-12-09 05:48:28.795971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.588 [2024-12-09 05:48:28.796002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:53:34.588 
[2024-12-09 05:48:28.805782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eeb328 00:53:34.588 [2024-12-09 05:48:28.806857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.588 [2024-12-09 05:48:28.806902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:53:34.847 [2024-12-09 05:48:28.818120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee4578 00:53:34.847 [2024-12-09 05:48:28.819536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.847 [2024-12-09 05:48:28.819566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:53:34.847 [2024-12-09 05:48:28.830049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee4de8 00:53:34.847 [2024-12-09 05:48:28.831304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.847 [2024-12-09 05:48:28.831348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:53:34.847 [2024-12-09 05:48:28.841861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee3060 00:53:34.847 [2024-12-09 05:48:28.842774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.847 [2024-12-09 05:48:28.842814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:53:34.847 [2024-12-09 05:48:28.853116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef0788 00:53:34.847 [2024-12-09 05:48:28.853865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.847 [2024-12-09 05:48:28.853896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:53:34.847 [2024-12-09 05:48:28.866629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016efcdd0 00:53:34.847 [2024-12-09 05:48:28.868026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.847 [2024-12-09 05:48:28.868071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:53:34.847 [2024-12-09 05:48:28.878006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016efe720 00:53:34.847 [2024-12-09 05:48:28.879323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.847 [2024-12-09 05:48:28.879354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 
00:53:34.847 [2024-12-09 05:48:28.889325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ede470 00:53:34.847 [2024-12-09 05:48:28.890425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.848 [2024-12-09 05:48:28.890470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:53:34.848 [2024-12-09 05:48:28.900630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016efdeb0 00:53:34.848 [2024-12-09 05:48:28.901586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.848 [2024-12-09 05:48:28.901632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:53:34.848 [2024-12-09 05:48:28.911915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef4b08 00:53:34.848 [2024-12-09 05:48:28.912872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.848 [2024-12-09 05:48:28.912903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:53:34.848 [2024-12-09 05:48:28.925674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee2c28 00:53:34.848 [2024-12-09 05:48:28.926715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.848 [2024-12-09 05:48:28.926745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:53:34.848 [2024-12-09 05:48:28.936857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eec840 00:53:34.848 [2024-12-09 05:48:28.937744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.848 [2024-12-09 05:48:28.937784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:53:34.848 [2024-12-09 05:48:28.948050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016edfdc0 00:53:34.848 [2024-12-09 05:48:28.948829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.848 [2024-12-09 05:48:28.948860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:53:34.848 [2024-12-09 05:48:28.962858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eebb98 00:53:34.848 [2024-12-09 05:48:28.964780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.848 [2024-12-09 05:48:28.964826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0068 p:0 
m:0 dnr:0 00:53:34.848 [2024-12-09 05:48:28.974804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016efda78 00:53:34.848 [2024-12-09 05:48:28.976588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.848 [2024-12-09 05:48:28.976634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:53:34.848 [2024-12-09 05:48:28.983411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef4b08 00:53:34.848 [2024-12-09 05:48:28.984410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.848 [2024-12-09 05:48:28.984455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:53:34.848 [2024-12-09 05:48:28.995224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ef6cc8 00:53:34.848 [2024-12-09 05:48:28.996111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.848 [2024-12-09 05:48:28.996179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:53:34.848 [2024-12-09 05:48:29.010030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee9168 00:53:34.848 [2024-12-09 05:48:29.011624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.848 [2024-12-09 05:48:29.011681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:53:34.848 [2024-12-09 05:48:29.021829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016efb048 00:53:34.848 [2024-12-09 05:48:29.023280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.848 [2024-12-09 05:48:29.023353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:53:34.848 [2024-12-09 05:48:29.033680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ede8a8 00:53:34.848 [2024-12-09 05:48:29.034798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.848 [2024-12-09 05:48:29.034839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:53:34.848 [2024-12-09 05:48:29.044801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee0630 00:53:34.848 [2024-12-09 05:48:29.046715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:34.848 [2024-12-09 05:48:29.046759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0
00:53:34.848 [2024-12-09 05:48:29.057978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016ee4140
00:53:34.848 [2024-12-09 05:48:29.059448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:53:34.848 [2024-12-09 05:48:29.059480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:53:34.848 21478.50 IOPS, 83.90 MiB/s [2024-12-09T04:48:29.073Z] [2024-12-09 05:48:29.069540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78cd50) with pdu=0x200016eef6a8
00:53:34.848 [2024-12-09 05:48:29.070254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:53:34.848 [2024-12-09 05:48:29.070306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:53:35.107
00:53:35.107 Latency(us)
00:53:35.107 [2024-12-09T04:48:29.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:53:35.107 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:53:35.107 nvme0n1 : 2.01 21478.58 83.90 0.00 0.00 5950.39 2791.35 16602.45
00:53:35.107 [2024-12-09T04:48:29.332Z] ===================================================================================================================
00:53:35.107 [2024-12-09T04:48:29.332Z] Total : 21478.58 83.90 0.00 0.00 5950.39 2791.35 16602.45
00:53:35.107 {
00:53:35.107 "results": [
00:53:35.107 {
00:53:35.107 "job": "nvme0n1",
00:53:35.107 "core_mask": "0x2",
00:53:35.107 "workload": "randwrite",
00:53:35.107 "status": "finished",
00:53:35.107 "queue_depth": 128,
00:53:35.107 "io_size": 4096,
00:53:35.107 "runtime": 2.005952,
00:53:35.107 "iops": 21478.5797466739,
00:53:35.107 "mibps": 83.90070213544492,
00:53:35.107 "io_failed": 0,
00:53:35.107 "io_timeout": 0,
00:53:35.107 "avg_latency_us": 5950.385089250792,
00:53:35.107 "min_latency_us": 2791.348148148148,
00:53:35.107 "max_latency_us": 16602.453333333335
00:53:35.107 }
00:53:35.107 ],
00:53:35.107 "core_count": 1
00:53:35.107 }
00:53:35.107 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:53:35.107 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:53:35.107 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:53:35.107 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:53:35.107 | .driver_specific
00:53:35.107 | .nvme_error
00:53:35.107 | .status_code
00:53:35.107 | .command_transient_transport_error'
00:53:35.365 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 169 > 0 ))
00:53:35.365 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 756370
00:53:35.365 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 756370 ']'
00:53:35.365 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 756370
00:53:35.365 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:53:35.365 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:53:35.365 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 756370
00:53:35.365 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:53:35.365 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:53:35.365 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 756370'
00:53:35.365 killing process with pid 756370
00:53:35.365 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 756370
00:53:35.365 Received shutdown signal, test time was about 2.000000 seconds
00:53:35.365
00:53:35.365 Latency(us)
00:53:35.365 [2024-12-09T04:48:29.590Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:53:35.365 [2024-12-09T04:48:29.590Z] ===================================================================================================================
00:53:35.365 [2024-12-09T04:48:29.590Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:53:35.365 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 756370
00:53:35.623 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:53:35.623 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:53:35.623 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:53:35.623 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:53:35.623 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:53:35.623 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=756780
00:53:35.623 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:53:35.623 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 756780 /var/tmp/bperf.sock
00:53:35.623 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 756780 ']'
00:53:35.623 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:53:35.623 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:53:35.623 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:53:35.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
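The pass/fail decision for the run above comes down to the get_transient_errcount call traced earlier: bdev_get_iostat is queried over the bdevperf RPC socket and the COMMAND TRANSIENT TRANSPORT ERROR counter is pulled out with jq (169 in this run). A minimal standalone sketch of that check, assuming a bdevperf instance listening on /var/tmp/bperf.sock, a bdev named nvme0n1, and SPDK_DIR pointing at an SPDK checkout (the variable name is illustrative, not taken from this log):

  # Sketch: read the transient transport error count that the digest test asserts on.
  get_transient_errcount() {
      local bdev=$1
      "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
  }
  errs=$(get_transient_errcount nvme0n1)
  (( errs > 0 )) && echo "observed $errs transient transport errors"   # 169 in the run above

The nvme_error block is only populated when bdev_nvme_set_options --nvme-error-stat is enabled, as in the setup traced below.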
00:53:35.623 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:35.623 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:53:35.623 [2024-12-09 05:48:29.716612] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:53:35.623 [2024-12-09 05:48:29.716701] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid756780 ] 00:53:35.623 I/O size of 131072 is greater than zero copy threshold (65536). 00:53:35.623 Zero copy mechanism will not be used. 00:53:35.623 [2024-12-09 05:48:29.782918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:35.623 [2024-12-09 05:48:29.837515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:35.882 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:35.882 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:53:35.882 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:53:35.882 05:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:53:36.140 05:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:53:36.140 05:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:36.140 05:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:53:36.140 05:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:36.140 05:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:53:36.140 05:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:53:36.399 nvme0n1 00:53:36.399 05:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:53:36.399 05:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:36.399 05:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:53:36.399 05:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:36.399 05:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:53:36.399 05:48:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
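Before the error counters start accumulating below, the trace above shows the full setup for this second pass (randwrite, 131072-byte I/O, queue depth 16). A condensed sketch of that sequence, using only paths and arguments that appear in the trace ($SPDK is shorthand for the workspace checkout; rpc_cmd is the harness helper whose target socket is not expanded in this trace, so it is kept as-is):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # enable per-controller NVMe error counters and unlimited bdev retries on the bdevperf side
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # clear any previous crc32c injection, then attach the controller with data digest (--ddgst) enabled
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # re-arm the injection so crc32c results are corrupted during the run
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  # run the workload bdevperf was started with (-w randwrite -o 131072 -q 16 -t 2)
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests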
00:53:36.658 I/O size of 131072 is greater than zero copy threshold (65536). 00:53:36.659 Zero copy mechanism will not be used. 00:53:36.659 Running I/O for 2 seconds... 00:53:36.659 [2024-12-09 05:48:30.705529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.659 [2024-12-09 05:48:30.705714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.659 [2024-12-09 05:48:30.705755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:36.659 [2024-12-09 05:48:30.711316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.659 [2024-12-09 05:48:30.711485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.659 [2024-12-09 05:48:30.711517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:36.659 [2024-12-09 05:48:30.718715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.659 [2024-12-09 05:48:30.718810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.659 [2024-12-09 05:48:30.718839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:36.659 [2024-12-09 05:48:30.725409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.659 [2024-12-09 05:48:30.725569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.659 [2024-12-09 05:48:30.725600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:36.659 [2024-12-09 05:48:30.731781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.659 [2024-12-09 05:48:30.731980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.659 [2024-12-09 05:48:30.732010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:36.659 [2024-12-09 05:48:30.738327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.659 [2024-12-09 05:48:30.738527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.659 [2024-12-09 05:48:30.738557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:36.659 [2024-12-09 05:48:30.745615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.659 [2024-12-09 05:48:30.745785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:53:36.659 [2024-12-09 05:48:30.745816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:36.659 [2024-12-09 05:48:30.752945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.659 [2024-12-09 05:48:30.753104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.659 [2024-12-09 05:48:30.753134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:36.659 [2024-12-09 05:48:30.759697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.659 [2024-12-09 05:48:30.759878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.659 [2024-12-09 05:48:30.759908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:36.659 [2024-12-09 05:48:30.767114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.659 [2024-12-09 05:48:30.767315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.659 [2024-12-09 05:48:30.767355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:36.659 [2024-12-09 05:48:30.774484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.659 [2024-12-09 05:48:30.774655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.659 [2024-12-09 05:48:30.774685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:36.659 [2024-12-09 05:48:30.781705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.659 [2024-12-09 05:48:30.781878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.659 [2024-12-09 05:48:30.781908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:36.659 [2024-12-09 05:48:30.788878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.659 [2024-12-09 05:48:30.788980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.659 [2024-12-09 05:48:30.789009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:36.659 [2024-12-09 05:48:30.796036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.659 [2024-12-09 05:48:30.796226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:53:36.659 [2024-12-09 05:48:30.796255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:36.659 [2024-12-09 05:48:30.803308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.659 [2024-12-09 05:48:30.803470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.659 [2024-12-09 05:48:30.803500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:36.659 [2024-12-09 05:48:30.810324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.659 [2024-12-09 05:48:30.810396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.659 [2024-12-09 05:48:30.810424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:36.659 [2024-12-09 05:48:30.817254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.659 [2024-12-09 05:48:30.817407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.659 [2024-12-09 05:48:30.817451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:36.659 [2024-12-09 05:48:30.824465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.659 [2024-12-09 05:48:30.824655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.659 [2024-12-09 05:48:30.824685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:36.659 [2024-12-09 05:48:30.831401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.659 [2024-12-09 05:48:30.831594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.659 [2024-12-09 05:48:30.831629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:36.659 [2024-12-09 05:48:30.838884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.659 [2024-12-09 05:48:30.839037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.659 [2024-12-09 05:48:30.839067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:36.659 [2024-12-09 05:48:30.846231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.659 [2024-12-09 05:48:30.846410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14560 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.659 [2024-12-09 05:48:30.846440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:36.660 [2024-12-09 05:48:30.854011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.660 [2024-12-09 05:48:30.854219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.660 [2024-12-09 05:48:30.854248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:36.660 [2024-12-09 05:48:30.860924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.660 [2024-12-09 05:48:30.861040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.660 [2024-12-09 05:48:30.861069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:36.660 [2024-12-09 05:48:30.867116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.660 [2024-12-09 05:48:30.867189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.660 [2024-12-09 05:48:30.867216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:36.660 [2024-12-09 05:48:30.872756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.660 [2024-12-09 05:48:30.872913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.660 [2024-12-09 05:48:30.872942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:36.660 [2024-12-09 05:48:30.877946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.660 [2024-12-09 05:48:30.878028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.660 [2024-12-09 05:48:30.878054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:36.918 [2024-12-09 05:48:30.883213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.918 [2024-12-09 05:48:30.883406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.918 [2024-12-09 05:48:30.883436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:36.918 [2024-12-09 05:48:30.889722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.918 [2024-12-09 05:48:30.889833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.918 [2024-12-09 05:48:30.889863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:36.918 [2024-12-09 05:48:30.896115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.918 [2024-12-09 05:48:30.896314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.918 [2024-12-09 05:48:30.896345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:36.918 [2024-12-09 05:48:30.903414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.918 [2024-12-09 05:48:30.903596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.918 [2024-12-09 05:48:30.903625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:36.918 [2024-12-09 05:48:30.910228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.918 [2024-12-09 05:48:30.910359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.918 [2024-12-09 05:48:30.910390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:36.918 [2024-12-09 05:48:30.915691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.918 [2024-12-09 05:48:30.915787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.918 [2024-12-09 05:48:30.915814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:36.918 [2024-12-09 05:48:30.920801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.918 [2024-12-09 05:48:30.920992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.918 [2024-12-09 05:48:30.921021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:36.918 [2024-12-09 05:48:30.926206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.918 [2024-12-09 05:48:30.926306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.918 [2024-12-09 05:48:30.926334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:36.918 [2024-12-09 05:48:30.931734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.918 [2024-12-09 05:48:30.931823] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.918 [2024-12-09 05:48:30.931851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:36.918 [2024-12-09 05:48:30.937493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.918 [2024-12-09 05:48:30.937567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.918 [2024-12-09 05:48:30.937594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:36.918 [2024-12-09 05:48:30.942812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.918 [2024-12-09 05:48:30.942893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:30.942921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:30.947658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:30.947741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:30.947768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:30.952504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:30.952621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:30.952650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:30.957542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:30.957628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:30.957662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:30.962481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:30.962555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:30.962585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:30.967446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:30.967533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:30.967561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:30.972431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:30.972518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:30.972545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:30.977278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:30.977349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:30.977377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:30.982089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:30.982198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:30.982233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:30.987041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:30.987117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:30.987143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:30.992294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:30.992404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:30.992434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:30.997512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:30.997592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:30.997619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:31.002466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 
05:48:31.002545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:31.002572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:31.007262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:31.007357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:31.007384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:31.012731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:31.012882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:31.012912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:31.017691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:31.017814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:31.017844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:31.022889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:31.023087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:31.023116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:31.029235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:31.029433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:31.029463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:31.034392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:31.034486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:31.034513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:31.039538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 
00:53:36.919 [2024-12-09 05:48:31.039684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:31.039713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:31.044683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:31.044760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:31.044787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:31.051639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:31.051999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:31.052029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:31.058027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:31.058126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:31.058155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:31.065299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:31.065489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:31.065519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:31.071487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:31.071671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:31.071701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:31.077922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:31.078123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:31.078152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:31.084356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with 
pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:31.084541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:31.084571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:31.090452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:31.090615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:31.090644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:31.096759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:31.096936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:31.096966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:31.103149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:31.103336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:31.103366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:31.108855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:31.108957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:31.108986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:31.114615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:31.114807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:31.114838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:31.120802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:31.120997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:31.121028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:31.127326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:31.127494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:31.127524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:31.133962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:31.134035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:31.134068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:36.919 [2024-12-09 05:48:31.141403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:36.919 [2024-12-09 05:48:31.141578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:36.919 [2024-12-09 05:48:31.141608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.177 [2024-12-09 05:48:31.148340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.177 [2024-12-09 05:48:31.148508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.177 [2024-12-09 05:48:31.148539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.177 [2024-12-09 05:48:31.155011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.177 [2024-12-09 05:48:31.155149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.178 [2024-12-09 05:48:31.155180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.178 [2024-12-09 05:48:31.160192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.178 [2024-12-09 05:48:31.160321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.178 [2024-12-09 05:48:31.160348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.178 [2024-12-09 05:48:31.165227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.178 [2024-12-09 05:48:31.165337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.178 [2024-12-09 05:48:31.165365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.178 [2024-12-09 05:48:31.170229] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.178 [2024-12-09 05:48:31.170343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.178 [2024-12-09 05:48:31.170371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.178 [2024-12-09 05:48:31.175210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.178 [2024-12-09 05:48:31.175315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.178 [2024-12-09 05:48:31.175343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.178 [2024-12-09 05:48:31.180068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.178 [2024-12-09 05:48:31.180197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.178 [2024-12-09 05:48:31.180226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.178 [2024-12-09 05:48:31.186088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.178 [2024-12-09 05:48:31.186254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.178 [2024-12-09 05:48:31.186295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.178 [2024-12-09 05:48:31.192332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.178 [2024-12-09 05:48:31.192481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.178 [2024-12-09 05:48:31.192510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.178 [2024-12-09 05:48:31.198892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.178 [2024-12-09 05:48:31.199091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.178 [2024-12-09 05:48:31.199120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.178 [2024-12-09 05:48:31.205593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.178 [2024-12-09 05:48:31.205756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.178 [2024-12-09 05:48:31.205785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.178 [2024-12-09 05:48:31.212677] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.178 [2024-12-09 05:48:31.212842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.178 [2024-12-09 05:48:31.212872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.178 [2024-12-09 05:48:31.219454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.178 [2024-12-09 05:48:31.219549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.178 [2024-12-09 05:48:31.219577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.178 [2024-12-09 05:48:31.224545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.178 [2024-12-09 05:48:31.224643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.178 [2024-12-09 05:48:31.224671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.178 [2024-12-09 05:48:31.229513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.178 [2024-12-09 05:48:31.229599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.178 [2024-12-09 05:48:31.229629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.178 [2024-12-09 05:48:31.234478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.178 [2024-12-09 05:48:31.234566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.178 [2024-12-09 05:48:31.234593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.178 [2024-12-09 05:48:31.239479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.178 [2024-12-09 05:48:31.239548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.178 [2024-12-09 05:48:31.239576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.178 [2024-12-09 05:48:31.244600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.178 [2024-12-09 05:48:31.244689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.178 [2024-12-09 05:48:31.244716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.178 
[2024-12-09 05:48:31.249620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.178 [2024-12-09 05:48:31.249699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.178 [2024-12-09 05:48:31.249726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.178 [2024-12-09 05:48:31.255004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.178 [2024-12-09 05:48:31.255106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.178 [2024-12-09 05:48:31.255133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.178 [2024-12-09 05:48:31.260300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.178 [2024-12-09 05:48:31.260378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.178 [2024-12-09 05:48:31.260405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.178 [2024-12-09 05:48:31.265309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.178 [2024-12-09 05:48:31.265394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.178 [2024-12-09 05:48:31.265421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.178 [2024-12-09 05:48:31.270206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.178 [2024-12-09 05:48:31.270286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.178 [2024-12-09 05:48:31.270315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.178 [2024-12-09 05:48:31.275234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.178 [2024-12-09 05:48:31.275321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.179 [2024-12-09 05:48:31.275349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.179 [2024-12-09 05:48:31.280451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.179 [2024-12-09 05:48:31.280578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.179 [2024-12-09 05:48:31.280613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 
m:0 dnr:0 00:53:37.179 [2024-12-09 05:48:31.285653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.179 [2024-12-09 05:48:31.285799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.179 [2024-12-09 05:48:31.285826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.179 [2024-12-09 05:48:31.291111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.179 [2024-12-09 05:48:31.291205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.179 [2024-12-09 05:48:31.291232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.179 [2024-12-09 05:48:31.296386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.179 [2024-12-09 05:48:31.296493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.179 [2024-12-09 05:48:31.296521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.179 [2024-12-09 05:48:31.301304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.179 [2024-12-09 05:48:31.301393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.179 [2024-12-09 05:48:31.301419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.179 [2024-12-09 05:48:31.306250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.179 [2024-12-09 05:48:31.306379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.179 [2024-12-09 05:48:31.306407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.179 [2024-12-09 05:48:31.311948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.179 [2024-12-09 05:48:31.312032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.179 [2024-12-09 05:48:31.312060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.179 [2024-12-09 05:48:31.318602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.179 [2024-12-09 05:48:31.318752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.179 [2024-12-09 05:48:31.318780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 
cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.179 [2024-12-09 05:48:31.324241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.179 [2024-12-09 05:48:31.324360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.179 [2024-12-09 05:48:31.324387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.179 [2024-12-09 05:48:31.329754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.179 [2024-12-09 05:48:31.329897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.179 [2024-12-09 05:48:31.329925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.179 [2024-12-09 05:48:31.334664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.179 [2024-12-09 05:48:31.334795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.179 [2024-12-09 05:48:31.334823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.179 [2024-12-09 05:48:31.339700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.179 [2024-12-09 05:48:31.339915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.179 [2024-12-09 05:48:31.339945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.179 [2024-12-09 05:48:31.345935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.179 [2024-12-09 05:48:31.346113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.179 [2024-12-09 05:48:31.346143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.179 [2024-12-09 05:48:31.351350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.179 [2024-12-09 05:48:31.351454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.179 [2024-12-09 05:48:31.351481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.179 [2024-12-09 05:48:31.356429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.179 [2024-12-09 05:48:31.356582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.179 [2024-12-09 05:48:31.356612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.179 [2024-12-09 05:48:31.361267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.179 [2024-12-09 05:48:31.361431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.179 [2024-12-09 05:48:31.361461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.179 [2024-12-09 05:48:31.366322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.179 [2024-12-09 05:48:31.366450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.179 [2024-12-09 05:48:31.366480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.179 [2024-12-09 05:48:31.371311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.179 [2024-12-09 05:48:31.371428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.179 [2024-12-09 05:48:31.371456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.179 [2024-12-09 05:48:31.376448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.179 [2024-12-09 05:48:31.376603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.179 [2024-12-09 05:48:31.376634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.179 [2024-12-09 05:48:31.383081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.179 [2024-12-09 05:48:31.383266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.179 [2024-12-09 05:48:31.383304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.179 [2024-12-09 05:48:31.389708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.179 [2024-12-09 05:48:31.389830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.180 [2024-12-09 05:48:31.389858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.180 [2024-12-09 05:48:31.396327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.180 [2024-12-09 05:48:31.396439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.180 [2024-12-09 05:48:31.396466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.437 [2024-12-09 05:48:31.402691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.437 [2024-12-09 05:48:31.402767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.437 [2024-12-09 05:48:31.402796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.437 [2024-12-09 05:48:31.408411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.437 [2024-12-09 05:48:31.408482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.437 [2024-12-09 05:48:31.408509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.437 [2024-12-09 05:48:31.414214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.437 [2024-12-09 05:48:31.414302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.437 [2024-12-09 05:48:31.414331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.437 [2024-12-09 05:48:31.419891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.437 [2024-12-09 05:48:31.419990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.437 [2024-12-09 05:48:31.420018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.437 [2024-12-09 05:48:31.425345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.437 [2024-12-09 05:48:31.425426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.437 [2024-12-09 05:48:31.425459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.437 [2024-12-09 05:48:31.431103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.437 [2024-12-09 05:48:31.431176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.437 [2024-12-09 05:48:31.431203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.437 [2024-12-09 05:48:31.436569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.437 [2024-12-09 05:48:31.436681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.437 [2024-12-09 05:48:31.436711] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.437 [2024-12-09 05:48:31.442314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.437 [2024-12-09 05:48:31.442414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.437 [2024-12-09 05:48:31.442441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.437 [2024-12-09 05:48:31.448036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.437 [2024-12-09 05:48:31.448107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.438 [2024-12-09 05:48:31.448135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.438 [2024-12-09 05:48:31.453715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.438 [2024-12-09 05:48:31.453816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.438 [2024-12-09 05:48:31.453843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.438 [2024-12-09 05:48:31.459256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.438 [2024-12-09 05:48:31.459375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.438 [2024-12-09 05:48:31.459403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.438 [2024-12-09 05:48:31.464933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.438 [2024-12-09 05:48:31.465012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.438 [2024-12-09 05:48:31.465040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.438 [2024-12-09 05:48:31.470509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.438 [2024-12-09 05:48:31.470594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.438 [2024-12-09 05:48:31.470621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.438 [2024-12-09 05:48:31.475862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.438 [2024-12-09 05:48:31.475943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.438 [2024-12-09 05:48:31.475970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.438 [2024-12-09 05:48:31.480924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.438 [2024-12-09 05:48:31.481006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.438 [2024-12-09 05:48:31.481034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.438 [2024-12-09 05:48:31.486022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.438 [2024-12-09 05:48:31.486105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.438 [2024-12-09 05:48:31.486133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.438 [2024-12-09 05:48:31.490815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.438 [2024-12-09 05:48:31.490898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.438 [2024-12-09 05:48:31.490926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.438 [2024-12-09 05:48:31.496029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.438 [2024-12-09 05:48:31.496138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.438 [2024-12-09 05:48:31.496166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.438 [2024-12-09 05:48:31.502107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.438 [2024-12-09 05:48:31.502289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.438 [2024-12-09 05:48:31.502318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.438 [2024-12-09 05:48:31.508242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.438 [2024-12-09 05:48:31.508419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.438 [2024-12-09 05:48:31.508450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.438 [2024-12-09 05:48:31.514434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.438 [2024-12-09 05:48:31.514622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.438 [2024-12-09 
05:48:31.514652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.438 [2024-12-09 05:48:31.520671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.438 [2024-12-09 05:48:31.520859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.438 [2024-12-09 05:48:31.520887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.438 [2024-12-09 05:48:31.526972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.438 [2024-12-09 05:48:31.527144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.438 [2024-12-09 05:48:31.527173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.438 [2024-12-09 05:48:31.533261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.438 [2024-12-09 05:48:31.533450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.438 [2024-12-09 05:48:31.533480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.438 [2024-12-09 05:48:31.539589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.438 [2024-12-09 05:48:31.539767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.438 [2024-12-09 05:48:31.539796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.438 [2024-12-09 05:48:31.545908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.438 [2024-12-09 05:48:31.546083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.438 [2024-12-09 05:48:31.546110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.438 [2024-12-09 05:48:31.552219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.438 [2024-12-09 05:48:31.552418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.438 [2024-12-09 05:48:31.552447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.438 [2024-12-09 05:48:31.558520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.438 [2024-12-09 05:48:31.558706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:53:37.438 [2024-12-09 05:48:31.558735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.438 [2024-12-09 05:48:31.564787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.438 [2024-12-09 05:48:31.564979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.438 [2024-12-09 05:48:31.565007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.438 [2024-12-09 05:48:31.571086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.438 [2024-12-09 05:48:31.571269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.438 [2024-12-09 05:48:31.571304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.438 [2024-12-09 05:48:31.577536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.438 [2024-12-09 05:48:31.577731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.438 [2024-12-09 05:48:31.577764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.438 [2024-12-09 05:48:31.583765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.438 [2024-12-09 05:48:31.583941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.438 [2024-12-09 05:48:31.583969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.438 [2024-12-09 05:48:31.588827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.438 [2024-12-09 05:48:31.588961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.438 [2024-12-09 05:48:31.588988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.438 [2024-12-09 05:48:31.593670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.438 [2024-12-09 05:48:31.593757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.438 [2024-12-09 05:48:31.593785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.438 [2024-12-09 05:48:31.598858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.438 [2024-12-09 05:48:31.598996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:53:37.438 [2024-12-09 05:48:31.599023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.438 [2024-12-09 05:48:31.603962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.438 [2024-12-09 05:48:31.604040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.439 [2024-12-09 05:48:31.604068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.439 [2024-12-09 05:48:31.608839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.439 [2024-12-09 05:48:31.608960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.439 [2024-12-09 05:48:31.608987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.439 [2024-12-09 05:48:31.615024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.439 [2024-12-09 05:48:31.615204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.439 [2024-12-09 05:48:31.615231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.439 [2024-12-09 05:48:31.620574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.439 [2024-12-09 05:48:31.620694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.439 [2024-12-09 05:48:31.620721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.439 [2024-12-09 05:48:31.625493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.439 [2024-12-09 05:48:31.625642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.439 [2024-12-09 05:48:31.625671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.439 [2024-12-09 05:48:31.630853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.439 [2024-12-09 05:48:31.630979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.439 [2024-12-09 05:48:31.631007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.439 [2024-12-09 05:48:31.636673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.439 [2024-12-09 05:48:31.636744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15008 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.439 [2024-12-09 05:48:31.636772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.439 [2024-12-09 05:48:31.642781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.439 [2024-12-09 05:48:31.642877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.439 [2024-12-09 05:48:31.642905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.439 [2024-12-09 05:48:31.647615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.439 [2024-12-09 05:48:31.647710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.439 [2024-12-09 05:48:31.647738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.439 [2024-12-09 05:48:31.652459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.439 [2024-12-09 05:48:31.652538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.439 [2024-12-09 05:48:31.652565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.439 [2024-12-09 05:48:31.657799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.439 [2024-12-09 05:48:31.657913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.439 [2024-12-09 05:48:31.657940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.697 [2024-12-09 05:48:31.664156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.697 [2024-12-09 05:48:31.664351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.697 [2024-12-09 05:48:31.664380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.697 [2024-12-09 05:48:31.671019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.697 [2024-12-09 05:48:31.671168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.697 [2024-12-09 05:48:31.671196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.697 [2024-12-09 05:48:31.676623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.697 [2024-12-09 05:48:31.676731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.697 [2024-12-09 05:48:31.676759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.697 [2024-12-09 05:48:31.682145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.697 [2024-12-09 05:48:31.682290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.697 [2024-12-09 05:48:31.682319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.697 [2024-12-09 05:48:31.686996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.697 [2024-12-09 05:48:31.687133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.697 [2024-12-09 05:48:31.687161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.697 [2024-12-09 05:48:31.691911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.697 [2024-12-09 05:48:31.692007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.697 [2024-12-09 05:48:31.692034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.697 [2024-12-09 05:48:31.696847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.697 [2024-12-09 05:48:31.696951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.697 [2024-12-09 05:48:31.696979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.697 [2024-12-09 05:48:31.701839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.697 [2024-12-09 05:48:31.701934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.697 [2024-12-09 05:48:31.701962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.697 5326.00 IOPS, 665.75 MiB/s [2024-12-09T04:48:31.922Z] [2024-12-09 05:48:31.707754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.697 [2024-12-09 05:48:31.707866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.697 [2024-12-09 05:48:31.707894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.697 [2024-12-09 05:48:31.712856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.697 [2024-12-09 05:48:31.712959] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.697 [2024-12-09 05:48:31.712987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.697 [2024-12-09 05:48:31.717944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.697 [2024-12-09 05:48:31.718043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.697 [2024-12-09 05:48:31.718077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.697 [2024-12-09 05:48:31.723080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.697 [2024-12-09 05:48:31.723151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.697 [2024-12-09 05:48:31.723179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.697 [2024-12-09 05:48:31.728828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.697 [2024-12-09 05:48:31.728934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.697 [2024-12-09 05:48:31.728963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.697 [2024-12-09 05:48:31.735999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.697 [2024-12-09 05:48:31.736200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.697 [2024-12-09 05:48:31.736228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.697 [2024-12-09 05:48:31.742658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.697 [2024-12-09 05:48:31.742749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.697 [2024-12-09 05:48:31.742777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.697 [2024-12-09 05:48:31.749593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.697 [2024-12-09 05:48:31.749670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.697 [2024-12-09 05:48:31.749698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.697 [2024-12-09 05:48:31.755363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.697 
[2024-12-09 05:48:31.755442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.697 [2024-12-09 05:48:31.755470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.697 [2024-12-09 05:48:31.761070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.697 [2024-12-09 05:48:31.761146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.697 [2024-12-09 05:48:31.761173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.697 [2024-12-09 05:48:31.766567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.697 [2024-12-09 05:48:31.766644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.697 [2024-12-09 05:48:31.766672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.697 [2024-12-09 05:48:31.772119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.697 [2024-12-09 05:48:31.772205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.697 [2024-12-09 05:48:31.772233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.697 [2024-12-09 05:48:31.777570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.697 [2024-12-09 05:48:31.777643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.698 [2024-12-09 05:48:31.777671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.698 [2024-12-09 05:48:31.783181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.698 [2024-12-09 05:48:31.783252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.698 [2024-12-09 05:48:31.783289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.698 [2024-12-09 05:48:31.788688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.698 [2024-12-09 05:48:31.788793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.698 [2024-12-09 05:48:31.788820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.698 [2024-12-09 05:48:31.794306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with 
pdu=0x200016efef90 00:53:37.698 [2024-12-09 05:48:31.794382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.698 [2024-12-09 05:48:31.794409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.698 [2024-12-09 05:48:31.799893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.698 [2024-12-09 05:48:31.799963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.698 [2024-12-09 05:48:31.799990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.698 [2024-12-09 05:48:31.805454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.698 [2024-12-09 05:48:31.805540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.698 [2024-12-09 05:48:31.805568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.698 [2024-12-09 05:48:31.811071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.698 [2024-12-09 05:48:31.811176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.698 [2024-12-09 05:48:31.811203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.698 [2024-12-09 05:48:31.816926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.698 [2024-12-09 05:48:31.817113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.698 [2024-12-09 05:48:31.817140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.698 [2024-12-09 05:48:31.823186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.698 [2024-12-09 05:48:31.823311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.698 [2024-12-09 05:48:31.823339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.698 [2024-12-09 05:48:31.828536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.698 [2024-12-09 05:48:31.828647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.698 [2024-12-09 05:48:31.828674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.698 [2024-12-09 05:48:31.834108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.698 [2024-12-09 05:48:31.834183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.698 [2024-12-09 05:48:31.834211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.698 [2024-12-09 05:48:31.839241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.698 [2024-12-09 05:48:31.839405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.698 [2024-12-09 05:48:31.839433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.698 [2024-12-09 05:48:31.844733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.698 [2024-12-09 05:48:31.844812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.698 [2024-12-09 05:48:31.844839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.698 [2024-12-09 05:48:31.849589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.698 [2024-12-09 05:48:31.849703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.698 [2024-12-09 05:48:31.849731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.698 [2024-12-09 05:48:31.854731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.698 [2024-12-09 05:48:31.854934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.698 [2024-12-09 05:48:31.854962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.698 [2024-12-09 05:48:31.861048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.698 [2024-12-09 05:48:31.861249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.698 [2024-12-09 05:48:31.861284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.698 [2024-12-09 05:48:31.867376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.698 [2024-12-09 05:48:31.867553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.698 [2024-12-09 05:48:31.867581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.698 [2024-12-09 05:48:31.873555] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.698 [2024-12-09 05:48:31.873737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.698 [2024-12-09 05:48:31.873765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.698 [2024-12-09 05:48:31.879926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.698 [2024-12-09 05:48:31.880100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.698 [2024-12-09 05:48:31.880129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.698 [2024-12-09 05:48:31.886312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.698 [2024-12-09 05:48:31.886477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.698 [2024-12-09 05:48:31.886505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.698 [2024-12-09 05:48:31.892650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.698 [2024-12-09 05:48:31.892861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.698 [2024-12-09 05:48:31.892888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.698 [2024-12-09 05:48:31.898874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.698 [2024-12-09 05:48:31.899056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.698 [2024-12-09 05:48:31.899084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.698 [2024-12-09 05:48:31.905224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.698 [2024-12-09 05:48:31.905412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.698 [2024-12-09 05:48:31.905440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.698 [2024-12-09 05:48:31.911384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.698 [2024-12-09 05:48:31.911567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.698 [2024-12-09 05:48:31.911595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.698 [2024-12-09 05:48:31.917419] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.698 [2024-12-09 05:48:31.917525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.698 [2024-12-09 05:48:31.917553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.956 [2024-12-09 05:48:31.922308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.956 [2024-12-09 05:48:31.922420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.956 [2024-12-09 05:48:31.922454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.956 [2024-12-09 05:48:31.927135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.956 [2024-12-09 05:48:31.927261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.956 [2024-12-09 05:48:31.927297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.956 [2024-12-09 05:48:31.932476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.957 [2024-12-09 05:48:31.932617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.957 [2024-12-09 05:48:31.932645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.957 [2024-12-09 05:48:31.938524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.957 [2024-12-09 05:48:31.938670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.957 [2024-12-09 05:48:31.938697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.957 [2024-12-09 05:48:31.944457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.957 [2024-12-09 05:48:31.944562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.957 [2024-12-09 05:48:31.944590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.957 [2024-12-09 05:48:31.950009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.957 [2024-12-09 05:48:31.950098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.957 [2024-12-09 05:48:31.950125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.957 
[2024-12-09 05:48:31.955559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.957 [2024-12-09 05:48:31.955640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.957 [2024-12-09 05:48:31.955668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.957 [2024-12-09 05:48:31.960758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.957 [2024-12-09 05:48:31.960835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.957 [2024-12-09 05:48:31.960863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.957 [2024-12-09 05:48:31.966393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.957 [2024-12-09 05:48:31.966469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.957 [2024-12-09 05:48:31.966497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.957 [2024-12-09 05:48:31.971417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.957 [2024-12-09 05:48:31.971498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.957 [2024-12-09 05:48:31.971526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.957 [2024-12-09 05:48:31.976263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.957 [2024-12-09 05:48:31.976358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.957 [2024-12-09 05:48:31.976386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.957 [2024-12-09 05:48:31.981155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.957 [2024-12-09 05:48:31.981225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.957 [2024-12-09 05:48:31.981252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.957 [2024-12-09 05:48:31.985940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.957 [2024-12-09 05:48:31.986020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.957 [2024-12-09 05:48:31.986048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 
dnr:0 00:53:37.957 [2024-12-09 05:48:31.990978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.957 [2024-12-09 05:48:31.991054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.957 [2024-12-09 05:48:31.991082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.957 [2024-12-09 05:48:31.995963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.957 [2024-12-09 05:48:31.996042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.957 [2024-12-09 05:48:31.996069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.957 [2024-12-09 05:48:32.000852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.957 [2024-12-09 05:48:32.000934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.957 [2024-12-09 05:48:32.000961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.957 [2024-12-09 05:48:32.005688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.957 [2024-12-09 05:48:32.005769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.957 [2024-12-09 05:48:32.005796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.957 [2024-12-09 05:48:32.010793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.957 [2024-12-09 05:48:32.010871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.957 [2024-12-09 05:48:32.010899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.957 [2024-12-09 05:48:32.015841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.957 [2024-12-09 05:48:32.015948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.957 [2024-12-09 05:48:32.015975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.957 [2024-12-09 05:48:32.020886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.957 [2024-12-09 05:48:32.020977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.957 [2024-12-09 05:48:32.021004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 
cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.957 [2024-12-09 05:48:32.025870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.957 [2024-12-09 05:48:32.025956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.957 [2024-12-09 05:48:32.025984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.957 [2024-12-09 05:48:32.030815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.957 [2024-12-09 05:48:32.030889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.957 [2024-12-09 05:48:32.030916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.957 [2024-12-09 05:48:32.035891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.957 [2024-12-09 05:48:32.035967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.957 [2024-12-09 05:48:32.035995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.957 [2024-12-09 05:48:32.041498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.957 [2024-12-09 05:48:32.041573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.958 [2024-12-09 05:48:32.041601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.958 [2024-12-09 05:48:32.046431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.958 [2024-12-09 05:48:32.046508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.958 [2024-12-09 05:48:32.046535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.958 [2024-12-09 05:48:32.051263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.958 [2024-12-09 05:48:32.051356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.958 [2024-12-09 05:48:32.051383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.958 [2024-12-09 05:48:32.056055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.958 [2024-12-09 05:48:32.056139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.958 [2024-12-09 05:48:32.056172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.958 [2024-12-09 05:48:32.060824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.958 [2024-12-09 05:48:32.060895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.958 [2024-12-09 05:48:32.060923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.958 [2024-12-09 05:48:32.065807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.958 [2024-12-09 05:48:32.065890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.958 [2024-12-09 05:48:32.065918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.958 [2024-12-09 05:48:32.070937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.958 [2024-12-09 05:48:32.071033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.958 [2024-12-09 05:48:32.071061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.958 [2024-12-09 05:48:32.075999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.958 [2024-12-09 05:48:32.076074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.958 [2024-12-09 05:48:32.076101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.958 [2024-12-09 05:48:32.081120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.958 [2024-12-09 05:48:32.081211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.958 [2024-12-09 05:48:32.081239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.958 [2024-12-09 05:48:32.085914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.958 [2024-12-09 05:48:32.086014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.958 [2024-12-09 05:48:32.086042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.958 [2024-12-09 05:48:32.091003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.958 [2024-12-09 05:48:32.091095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.958 [2024-12-09 05:48:32.091123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.958 [2024-12-09 05:48:32.095833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.958 [2024-12-09 05:48:32.095925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.958 [2024-12-09 05:48:32.095953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.958 [2024-12-09 05:48:32.100679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.958 [2024-12-09 05:48:32.100773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.958 [2024-12-09 05:48:32.100801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.958 [2024-12-09 05:48:32.105532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.958 [2024-12-09 05:48:32.105624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.958 [2024-12-09 05:48:32.105652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.958 [2024-12-09 05:48:32.110800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.958 [2024-12-09 05:48:32.110913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.958 [2024-12-09 05:48:32.110940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.958 [2024-12-09 05:48:32.115769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.958 [2024-12-09 05:48:32.115862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.958 [2024-12-09 05:48:32.115890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.958 [2024-12-09 05:48:32.120693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.958 [2024-12-09 05:48:32.120789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.958 [2024-12-09 05:48:32.120817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.958 [2024-12-09 05:48:32.125717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.958 [2024-12-09 05:48:32.125808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.958 [2024-12-09 05:48:32.125835] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.958 [2024-12-09 05:48:32.130546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.958 [2024-12-09 05:48:32.130633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.958 [2024-12-09 05:48:32.130661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.958 [2024-12-09 05:48:32.135461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.958 [2024-12-09 05:48:32.135553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.958 [2024-12-09 05:48:32.135581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.958 [2024-12-09 05:48:32.140384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.958 [2024-12-09 05:48:32.140519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.958 [2024-12-09 05:48:32.140547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.958 [2024-12-09 05:48:32.147298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.959 [2024-12-09 05:48:32.147492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.959 [2024-12-09 05:48:32.147530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:37.959 [2024-12-09 05:48:32.153944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.959 [2024-12-09 05:48:32.154143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.959 [2024-12-09 05:48:32.154173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:37.959 [2024-12-09 05:48:32.160983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.959 [2024-12-09 05:48:32.161157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.959 [2024-12-09 05:48:32.161187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:37.959 [2024-12-09 05:48:32.168571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.959 [2024-12-09 05:48:32.168742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.959 [2024-12-09 05:48:32.168772] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:37.959 [2024-12-09 05:48:32.175072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:37.959 [2024-12-09 05:48:32.175256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:37.959 [2024-12-09 05:48:32.175295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.217 [2024-12-09 05:48:32.181404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.217 [2024-12-09 05:48:32.181607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.217 [2024-12-09 05:48:32.181638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.217 [2024-12-09 05:48:32.187965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.217 [2024-12-09 05:48:32.188153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.217 [2024-12-09 05:48:32.188183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.217 [2024-12-09 05:48:32.194237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.217 [2024-12-09 05:48:32.194426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.217 [2024-12-09 05:48:32.194456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:38.217 [2024-12-09 05:48:32.200497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.217 [2024-12-09 05:48:32.200648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.217 [2024-12-09 05:48:32.200684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.217 [2024-12-09 05:48:32.207587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.217 [2024-12-09 05:48:32.207754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.217 [2024-12-09 05:48:32.207784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.217 [2024-12-09 05:48:32.214346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.217 [2024-12-09 05:48:32.214535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.217 [2024-12-09 
05:48:32.214565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.217 [2024-12-09 05:48:32.221664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.217 [2024-12-09 05:48:32.221849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.217 [2024-12-09 05:48:32.221879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:38.217 [2024-12-09 05:48:32.229034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.217 [2024-12-09 05:48:32.229142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.217 [2024-12-09 05:48:32.229172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.217 [2024-12-09 05:48:32.236536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.217 [2024-12-09 05:48:32.236718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.217 [2024-12-09 05:48:32.236747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.217 [2024-12-09 05:48:32.243668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.217 [2024-12-09 05:48:32.243739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.217 [2024-12-09 05:48:32.243767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.217 [2024-12-09 05:48:32.249377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.217 [2024-12-09 05:48:32.249455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.217 [2024-12-09 05:48:32.249482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:38.217 [2024-12-09 05:48:32.254601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.217 [2024-12-09 05:48:32.254722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.217 [2024-12-09 05:48:32.254752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.217 [2024-12-09 05:48:32.259519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.217 [2024-12-09 05:48:32.259624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
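The repeating pattern above comes from the host-side NVMe/TCP transport: data_crc32_calc_done() in tcp.c reports that the CRC32C data digest (DDGST) computed over a received data PDU does not match the digest carried with the PDU, and the affected WRITE is then completed with a transport-level error. As a rough illustration of that kind of check (a minimal standalone sketch under stated assumptions, not SPDK's tcp.c implementation; the crc32c() and pdu_data_digest_ok() helpers are hypothetical names), the digest can be computed with the usual NVMe/TCP CRC32C convention, seed 0xFFFFFFFF, reflected polynomial 0x82F63B78, final XOR 0xFFFFFFFF, and compared against the received value:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* Bitwise CRC32C (Castagnoli), assuming the usual NVMe/TCP digest
     * convention: seed 0xFFFFFFFF, reflected polynomial 0x82F63B78,
     * final XOR with 0xFFFFFFFF. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int k = 0; k < 8; k++) {
                if (crc & 1u) {
                    crc = (crc >> 1) ^ 0x82F63B78u;
                } else {
                    crc >>= 1;
                }
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* Hypothetical helper: compare a freshly computed digest against the
     * DDGST received at the tail of a data PDU. A mismatch is what the log
     * reports as "Data digest error". */
    static int pdu_data_digest_ok(const uint8_t *data, size_t len,
                                  uint32_t recv_ddgst)
    {
        return crc32c(data, len) == recv_ddgst;
    }

    int main(void)
    {
        uint8_t payload[32];
        memset(payload, 0xA5, sizeof(payload));

        uint32_t good = crc32c(payload, sizeof(payload));
        uint32_t bad  = good ^ 0x1u;   /* simulate a corrupted digest */

        printf("digest ok:  %d\n", pdu_data_digest_ok(payload, sizeof(payload), good));
        printf("digest bad: %d\n", pdu_data_digest_ok(payload, sizeof(payload), bad));
        return 0;
    }

The mismatch branch of such a check is what surfaces in this log as "Data digest error" on the qpair, followed by the transient transport error completion for the in-flight WRITE.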
00:53:38.217 [2024-12-09 05:48:32.259652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.217 [2024-12-09 05:48:32.264429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.217 [2024-12-09 05:48:32.264508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.218 [2024-12-09 05:48:32.264535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.218 [2024-12-09 05:48:32.269298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.218 [2024-12-09 05:48:32.269372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.218 [2024-12-09 05:48:32.269399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:38.218 [2024-12-09 05:48:32.274156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.218 [2024-12-09 05:48:32.274236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.218 [2024-12-09 05:48:32.274263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.218 [2024-12-09 05:48:32.279013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.218 [2024-12-09 05:48:32.279102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.218 [2024-12-09 05:48:32.279128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.218 [2024-12-09 05:48:32.283802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.218 [2024-12-09 05:48:32.283873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.218 [2024-12-09 05:48:32.283899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.218 [2024-12-09 05:48:32.288611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.218 [2024-12-09 05:48:32.288704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.218 [2024-12-09 05:48:32.288731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:38.218 [2024-12-09 05:48:32.293471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.218 [2024-12-09 05:48:32.293564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:53:38.218 [2024-12-09 05:48:32.293590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.218 [2024-12-09 05:48:32.298259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.218 [2024-12-09 05:48:32.298361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.218 [2024-12-09 05:48:32.298388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.218 [2024-12-09 05:48:32.303267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.218 [2024-12-09 05:48:32.303355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.218 [2024-12-09 05:48:32.303382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.218 [2024-12-09 05:48:32.308241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.218 [2024-12-09 05:48:32.308344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.218 [2024-12-09 05:48:32.308375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:38.218 [2024-12-09 05:48:32.313153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.218 [2024-12-09 05:48:32.313239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.218 [2024-12-09 05:48:32.313266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.218 [2024-12-09 05:48:32.317988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.218 [2024-12-09 05:48:32.318061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.218 [2024-12-09 05:48:32.318089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.218 [2024-12-09 05:48:32.323265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.218 [2024-12-09 05:48:32.323356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.218 [2024-12-09 05:48:32.323383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.218 [2024-12-09 05:48:32.328420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.218 [2024-12-09 05:48:32.328501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17856 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.218 [2024-12-09 05:48:32.328528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:38.218 [2024-12-09 05:48:32.333308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.218 [2024-12-09 05:48:32.333399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.218 [2024-12-09 05:48:32.333426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.218 [2024-12-09 05:48:32.338172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.218 [2024-12-09 05:48:32.338246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.218 [2024-12-09 05:48:32.338284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.218 [2024-12-09 05:48:32.343156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.218 [2024-12-09 05:48:32.343230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.218 [2024-12-09 05:48:32.343264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.218 [2024-12-09 05:48:32.348537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.218 [2024-12-09 05:48:32.348624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.218 [2024-12-09 05:48:32.348651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:38.218 [2024-12-09 05:48:32.355397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.218 [2024-12-09 05:48:32.355507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.218 [2024-12-09 05:48:32.355536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.218 [2024-12-09 05:48:32.360870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.218 [2024-12-09 05:48:32.361016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.218 [2024-12-09 05:48:32.361046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.218 [2024-12-09 05:48:32.366494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.218 [2024-12-09 05:48:32.366600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:2 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.218 [2024-12-09 05:48:32.366627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.218 [2024-12-09 05:48:32.371761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.218 [2024-12-09 05:48:32.371874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.218 [2024-12-09 05:48:32.371903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:38.218 [2024-12-09 05:48:32.376931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.218 [2024-12-09 05:48:32.377033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.218 [2024-12-09 05:48:32.377061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.218 [2024-12-09 05:48:32.381859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.218 [2024-12-09 05:48:32.381999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.218 [2024-12-09 05:48:32.382029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.218 [2024-12-09 05:48:32.386967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.218 [2024-12-09 05:48:32.387097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.218 [2024-12-09 05:48:32.387126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.218 [2024-12-09 05:48:32.391989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.218 [2024-12-09 05:48:32.392092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.218 [2024-12-09 05:48:32.392119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:38.218 [2024-12-09 05:48:32.397067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.218 [2024-12-09 05:48:32.397171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.218 [2024-12-09 05:48:32.397201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.218 [2024-12-09 05:48:32.401996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.218 [2024-12-09 05:48:32.402092] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.219 [2024-12-09 05:48:32.402119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.219 [2024-12-09 05:48:32.408156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.219 [2024-12-09 05:48:32.408234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.219 [2024-12-09 05:48:32.408262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.219 [2024-12-09 05:48:32.413657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.219 [2024-12-09 05:48:32.413749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.219 [2024-12-09 05:48:32.413776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:38.219 [2024-12-09 05:48:32.419162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.219 [2024-12-09 05:48:32.419297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.219 [2024-12-09 05:48:32.419327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.219 [2024-12-09 05:48:32.424683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.219 [2024-12-09 05:48:32.424759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.219 [2024-12-09 05:48:32.424786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.219 [2024-12-09 05:48:32.430157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.219 [2024-12-09 05:48:32.430238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.219 [2024-12-09 05:48:32.430265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.219 [2024-12-09 05:48:32.435582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.219 [2024-12-09 05:48:32.435688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.219 [2024-12-09 05:48:32.435717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:38.476 [2024-12-09 05:48:32.441262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.476 [2024-12-09 05:48:32.441390] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.476 [2024-12-09 05:48:32.441420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.476 [2024-12-09 05:48:32.446780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.476 [2024-12-09 05:48:32.446889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.476 [2024-12-09 05:48:32.446916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.476 [2024-12-09 05:48:32.452485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.476 [2024-12-09 05:48:32.452562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.476 [2024-12-09 05:48:32.452589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.476 [2024-12-09 05:48:32.458897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.476 [2024-12-09 05:48:32.458969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.476 [2024-12-09 05:48:32.458996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:38.476 [2024-12-09 05:48:32.464679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.476 [2024-12-09 05:48:32.464811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.476 [2024-12-09 05:48:32.464841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.476 [2024-12-09 05:48:32.470352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.476 [2024-12-09 05:48:32.470423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.476 [2024-12-09 05:48:32.470450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.476 [2024-12-09 05:48:32.476027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.476 [2024-12-09 05:48:32.476103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.476 [2024-12-09 05:48:32.476130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.476 [2024-12-09 05:48:32.482232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.476 [2024-12-09 
05:48:32.482328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.477 [2024-12-09 05:48:32.482356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:38.477 [2024-12-09 05:48:32.487404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.477 [2024-12-09 05:48:32.487491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.477 [2024-12-09 05:48:32.487524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.477 [2024-12-09 05:48:32.492345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.477 [2024-12-09 05:48:32.492429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.477 [2024-12-09 05:48:32.492456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.477 [2024-12-09 05:48:32.497255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.477 [2024-12-09 05:48:32.497371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.477 [2024-12-09 05:48:32.497398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.477 [2024-12-09 05:48:32.502251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.477 [2024-12-09 05:48:32.502366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.477 [2024-12-09 05:48:32.502393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:38.477 [2024-12-09 05:48:32.507046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.477 [2024-12-09 05:48:32.507182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.477 [2024-12-09 05:48:32.507211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.477 [2024-12-09 05:48:32.512364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.477 [2024-12-09 05:48:32.512525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.477 [2024-12-09 05:48:32.512565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.477 [2024-12-09 05:48:32.518652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 
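Each digest failure is reported back through the completion path as "COMMAND TRANSIENT TRANSPORT ERROR (00/22)". In the (SCT/SC) notation printed by spdk_nvme_print_completion, status code type 0x0 is Generic Command Status and status code 0x22 is Transient Transport Error, with the trailing p/m/dnr flags taken from the same status word. The following is a small standalone decoder for that 16-bit word (an illustrative sketch following the NVMe completion-queue-entry status layout, not SPDK's own structure definitions):

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the 16-bit word formed by CQE Dword 3 bits 16..31: the phase
     * tag plus the 15-bit Status Field, per the NVMe base specification. */
    struct cqe_status {
        unsigned p, sc, sct, crd, m, dnr;
    };

    static struct cqe_status decode_status(uint16_t w)
    {
        struct cqe_status s = {
            .p   =  w        & 0x1,   /* phase tag            */
            .sc  = (w >> 1)  & 0xFF,  /* status code          */
            .sct = (w >> 9)  & 0x7,   /* status code type     */
            .crd = (w >> 12) & 0x3,   /* command retry delay  */
            .m   = (w >> 14) & 0x1,   /* more                 */
            .dnr = (w >> 15) & 0x1,   /* do not retry         */
        };
        return s;
    }

    int main(void)
    {
        /* Value matching the completions in this log:
         * sct 0x0, sc 0x22, p/m/dnr all zero. */
        struct cqe_status s = decode_status((uint16_t)(0x22u << 1));

        printf("(%02x/%02x) p:%u m:%u dnr:%u crd:%u\n",
               s.sct, s.sc, s.p, s.m, s.dnr, s.crd);
        return 0;
    }

Decoding the value 0x22 << 1 reproduces the "(00/22) ... p:0 m:0 dnr:0" fields seen in each completion above; dnr:0 marks the failure as retryable, which is consistent with a transient transport condition rather than a media or command error.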
00:53:38.477 [2024-12-09 05:48:32.518825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.477 [2024-12-09 05:48:32.518855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.477 [2024-12-09 05:48:32.524111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.477 [2024-12-09 05:48:32.524212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.477 [2024-12-09 05:48:32.524240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:38.477 [2024-12-09 05:48:32.531266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.477 [2024-12-09 05:48:32.531427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.477 [2024-12-09 05:48:32.531458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.477 [2024-12-09 05:48:32.537420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.477 [2024-12-09 05:48:32.537549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.477 [2024-12-09 05:48:32.537589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.477 [2024-12-09 05:48:32.542550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.477 [2024-12-09 05:48:32.542654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.477 [2024-12-09 05:48:32.542683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.477 [2024-12-09 05:48:32.547953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.477 [2024-12-09 05:48:32.548044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.477 [2024-12-09 05:48:32.548072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:38.477 [2024-12-09 05:48:32.553950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.477 [2024-12-09 05:48:32.554039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.477 [2024-12-09 05:48:32.554067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.477 [2024-12-09 05:48:32.559765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with 
pdu=0x200016efef90 00:53:38.477 [2024-12-09 05:48:32.559959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.477 [2024-12-09 05:48:32.559988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.477 [2024-12-09 05:48:32.566157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.477 [2024-12-09 05:48:32.566279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.477 [2024-12-09 05:48:32.566307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.477 [2024-12-09 05:48:32.571607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.477 [2024-12-09 05:48:32.571709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.477 [2024-12-09 05:48:32.571738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:38.477 [2024-12-09 05:48:32.576548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.477 [2024-12-09 05:48:32.576667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.477 [2024-12-09 05:48:32.576695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.477 [2024-12-09 05:48:32.581692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.477 [2024-12-09 05:48:32.581848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.477 [2024-12-09 05:48:32.581877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.477 [2024-12-09 05:48:32.586663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.477 [2024-12-09 05:48:32.586763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.477 [2024-12-09 05:48:32.586791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.477 [2024-12-09 05:48:32.591622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.477 [2024-12-09 05:48:32.591767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.477 [2024-12-09 05:48:32.591796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:38.477 [2024-12-09 05:48:32.596569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.477 [2024-12-09 05:48:32.596679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.477 [2024-12-09 05:48:32.596708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.477 [2024-12-09 05:48:32.601396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.477 [2024-12-09 05:48:32.601551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.477 [2024-12-09 05:48:32.601591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.477 [2024-12-09 05:48:32.606232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.477 [2024-12-09 05:48:32.606394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.477 [2024-12-09 05:48:32.606424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.477 [2024-12-09 05:48:32.611137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.477 [2024-12-09 05:48:32.611253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.477 [2024-12-09 05:48:32.611302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:38.477 [2024-12-09 05:48:32.616316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.477 [2024-12-09 05:48:32.616467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.477 [2024-12-09 05:48:32.616497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.477 [2024-12-09 05:48:32.621557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.477 [2024-12-09 05:48:32.621707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.477 [2024-12-09 05:48:32.621736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.477 [2024-12-09 05:48:32.626795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.477 [2024-12-09 05:48:32.626894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.478 [2024-12-09 05:48:32.626928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.478 [2024-12-09 05:48:32.631708] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.478 [2024-12-09 05:48:32.631794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.478 [2024-12-09 05:48:32.631820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:38.478 [2024-12-09 05:48:32.636520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.478 [2024-12-09 05:48:32.636610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.478 [2024-12-09 05:48:32.636636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.478 [2024-12-09 05:48:32.641518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.478 [2024-12-09 05:48:32.641597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.478 [2024-12-09 05:48:32.641631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.478 [2024-12-09 05:48:32.646485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.478 [2024-12-09 05:48:32.646637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.478 [2024-12-09 05:48:32.646666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.478 [2024-12-09 05:48:32.652369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.478 [2024-12-09 05:48:32.652525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.478 [2024-12-09 05:48:32.652554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:38.478 [2024-12-09 05:48:32.658688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.478 [2024-12-09 05:48:32.658875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.478 [2024-12-09 05:48:32.658904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.478 [2024-12-09 05:48:32.664677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.478 [2024-12-09 05:48:32.664848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.478 [2024-12-09 05:48:32.664877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.478 [2024-12-09 05:48:32.670783] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.478 [2024-12-09 05:48:32.670936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.478 [2024-12-09 05:48:32.670965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.478 [2024-12-09 05:48:32.676718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.478 [2024-12-09 05:48:32.676867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.478 [2024-12-09 05:48:32.676902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:38.478 [2024-12-09 05:48:32.681375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.478 [2024-12-09 05:48:32.681457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.478 [2024-12-09 05:48:32.681484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.478 [2024-12-09 05:48:32.685961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.478 [2024-12-09 05:48:32.686116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.478 [2024-12-09 05:48:32.686146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.478 [2024-12-09 05:48:32.690686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.478 [2024-12-09 05:48:32.690800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.478 [2024-12-09 05:48:32.690829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:53:38.478 [2024-12-09 05:48:32.695349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.478 [2024-12-09 05:48:32.695494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.478 [2024-12-09 05:48:32.695523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:53:38.735 [2024-12-09 05:48:32.700387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.735 [2024-12-09 05:48:32.700541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.735 [2024-12-09 05:48:32.700570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:53:38.735 5492.00 
IOPS, 686.50 MiB/s [2024-12-09T04:48:32.960Z] [2024-12-09 05:48:32.708136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x78d090) with pdu=0x200016efef90 00:53:38.735 [2024-12-09 05:48:32.708346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:38.735 [2024-12-09 05:48:32.708374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:53:38.735 00:53:38.735 Latency(us) 00:53:38.735 [2024-12-09T04:48:32.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:38.735 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:53:38.735 nvme0n1 : 2.00 5488.59 686.07 0.00 0.00 2907.62 1978.22 7718.68 00:53:38.735 [2024-12-09T04:48:32.960Z] =================================================================================================================== 00:53:38.735 [2024-12-09T04:48:32.960Z] Total : 5488.59 686.07 0.00 0.00 2907.62 1978.22 7718.68 00:53:38.735 { 00:53:38.735 "results": [ 00:53:38.735 { 00:53:38.735 "job": "nvme0n1", 00:53:38.735 "core_mask": "0x2", 00:53:38.735 "workload": "randwrite", 00:53:38.735 "status": "finished", 00:53:38.735 "queue_depth": 16, 00:53:38.735 "io_size": 131072, 00:53:38.735 "runtime": 2.004706, 00:53:38.735 "iops": 5488.585358651094, 00:53:38.735 "mibps": 686.0731698313867, 00:53:38.735 "io_failed": 0, 00:53:38.735 "io_timeout": 0, 00:53:38.735 "avg_latency_us": 2907.6196684405936, 00:53:38.735 "min_latency_us": 1978.2162962962964, 00:53:38.735 "max_latency_us": 7718.684444444444 00:53:38.735 } 00:53:38.735 ], 00:53:38.735 "core_count": 1 00:53:38.735 } 00:53:38.735 05:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:53:38.735 05:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:53:38.735 05:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:53:38.735 05:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:53:38.735 | .driver_specific 00:53:38.735 | .nvme_error 00:53:38.735 | .status_code 00:53:38.735 | .command_transient_transport_error' 00:53:38.991 05:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 355 > 0 )) 00:53:38.991 05:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 756780 00:53:38.991 05:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 756780 ']' 00:53:38.991 05:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 756780 00:53:38.991 05:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:53:38.991 05:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:38.991 05:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 756780 00:53:38.991 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:53:38.991 05:48:33 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:53:38.991 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 756780' 00:53:38.991 killing process with pid 756780 00:53:38.991 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 756780 00:53:38.991 Received shutdown signal, test time was about 2.000000 seconds 00:53:38.991 00:53:38.991 Latency(us) 00:53:38.991 [2024-12-09T04:48:33.216Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:38.991 [2024-12-09T04:48:33.216Z] =================================================================================================================== 00:53:38.991 [2024-12-09T04:48:33.216Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:53:38.991 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 756780 00:53:39.249 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 755408 00:53:39.249 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 755408 ']' 00:53:39.249 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 755408 00:53:39.249 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:53:39.249 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:39.249 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 755408 00:53:39.249 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:53:39.249 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:53:39.249 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 755408' 00:53:39.249 killing process with pid 755408 00:53:39.249 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 755408 00:53:39.249 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 755408 00:53:39.507 00:53:39.507 real 0m15.435s 00:53:39.507 user 0m30.898s 00:53:39.508 sys 0m4.211s 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:53:39.508 ************************************ 00:53:39.508 END TEST nvmf_digest_error 00:53:39.508 ************************************ 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:53:39.508 05:48:33 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:53:39.508 rmmod nvme_tcp 00:53:39.508 rmmod nvme_fabrics 00:53:39.508 rmmod nvme_keyring 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 755408 ']' 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 755408 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 755408 ']' 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 755408 00:53:39.508 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (755408) - No such process 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 755408 is not found' 00:53:39.508 Process with pid 755408 is not found 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:39.508 05:48:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:42.040 05:48:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:53:42.040 00:53:42.040 real 0m35.901s 00:53:42.040 user 1m2.802s 00:53:42.040 sys 0m10.376s 00:53:42.040 05:48:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:42.040 05:48:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:53:42.040 ************************************ 00:53:42.040 END TEST nvmf_digest 00:53:42.040 ************************************ 00:53:42.040 05:48:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:53:42.040 05:48:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:53:42.040 05:48:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:53:42.041 05:48:35 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:53:42.041 ************************************ 00:53:42.041 START TEST nvmf_bdevperf 00:53:42.041 ************************************ 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:53:42.041 * Looking for test storage... 00:53:42.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:53:42.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:42.041 --rc genhtml_branch_coverage=1 00:53:42.041 --rc genhtml_function_coverage=1 00:53:42.041 --rc genhtml_legend=1 00:53:42.041 --rc geninfo_all_blocks=1 00:53:42.041 --rc geninfo_unexecuted_blocks=1 00:53:42.041 00:53:42.041 ' 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:53:42.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:42.041 --rc genhtml_branch_coverage=1 00:53:42.041 --rc genhtml_function_coverage=1 00:53:42.041 --rc genhtml_legend=1 00:53:42.041 --rc geninfo_all_blocks=1 00:53:42.041 --rc geninfo_unexecuted_blocks=1 00:53:42.041 00:53:42.041 ' 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:53:42.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:42.041 --rc genhtml_branch_coverage=1 00:53:42.041 --rc genhtml_function_coverage=1 00:53:42.041 --rc genhtml_legend=1 00:53:42.041 --rc geninfo_all_blocks=1 00:53:42.041 --rc geninfo_unexecuted_blocks=1 00:53:42.041 00:53:42.041 ' 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:53:42.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:42.041 --rc genhtml_branch_coverage=1 00:53:42.041 --rc genhtml_function_coverage=1 00:53:42.041 --rc genhtml_legend=1 00:53:42.041 --rc geninfo_all_blocks=1 00:53:42.041 --rc geninfo_unexecuted_blocks=1 00:53:42.041 00:53:42.041 ' 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:53:42.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:53:42.041 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:53:42.042 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:53:42.042 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:53:42.042 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:53:42.042 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:53:42.042 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:53:42.042 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:53:42.042 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:53:42.042 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:53:42.042 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:53:42.042 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:53:42.042 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:42.042 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:42.042 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:42.042 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:53:42.042 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:53:42.042 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:53:42.042 05:48:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:53:43.942 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:53:43.942 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:53:43.942 05:48:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
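The loop running here resolves each matched E810 port (vendor 0x8086, device 0x159b, the two functions 0000:0a:00.0 and 0000:0a:00.1 found above) to its kernel interface name by globbing the device's net/ directory in sysfs. A minimal standalone version of that lookup, using the PCI addresses reported in this log (a sketch, not part of the harness):

  # map each E810 PCI function to the netdev the kernel created for it
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$dev" ] && echo "$pci -> ${dev##*/}"
      done
  done

On this machine the lookup yields cvl_0_0 and cvl_0_1, as the 'Found net devices under ...' lines just below confirm. The harness then splits them: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target-side interface (10.0.0.2/24) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), with an iptables ACCEPT rule for TCP port 4420 and a ping in each direction to verify the path.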
00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:53:43.942 Found net devices under 0000:0a:00.0: cvl_0_0 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:53:43.942 Found net devices under 0000:0a:00.1: cvl_0_1 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:53:43.942 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:53:44.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:53:44.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:53:44.201 00:53:44.201 --- 10.0.0.2 ping statistics --- 00:53:44.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:44.201 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:53:44.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:53:44.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:53:44.201 00:53:44.201 --- 10.0.0.1 ping statistics --- 00:53:44.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:44.201 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=759258 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 759258 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 759258 ']' 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:44.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:44.201 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:53:44.201 [2024-12-09 05:48:38.309690] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:53:44.201 [2024-12-09 05:48:38.309776] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:53:44.201 [2024-12-09 05:48:38.382872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:53:44.459 [2024-12-09 05:48:38.441575] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:53:44.459 [2024-12-09 05:48:38.441622] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:53:44.459 [2024-12-09 05:48:38.441641] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:53:44.459 [2024-12-09 05:48:38.441652] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:53:44.459 [2024-12-09 05:48:38.441661] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:53:44.459 [2024-12-09 05:48:38.443040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:53:44.459 [2024-12-09 05:48:38.443064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:53:44.459 [2024-12-09 05:48:38.443067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:53:44.459 [2024-12-09 05:48:38.577460] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:53:44.459 Malloc0 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
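nvmf_tgt was started above with -m 0xE, a core mask of binary 1110, which is why its reactors come up on cores 1, 2 and 3 here (the bdevperf initiator further below runs with -c 0x1, on core 0). With the target listening on /var/tmp/spdk.sock, the harness configures it over RPC: a TCP transport, a 64 MB malloc bdev with 512-byte blocks (the MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE values set earlier), and subsystem nqn.2016-06.io.spdk:cnode1; the namespace and the 10.0.0.2:4420 listener are added in the two rpc_cmd calls that follow just below. Condensed into one place, the same sequence issued directly through rpc.py would look roughly like this (a sketch assembled from the commands in this log, not a separate script the harness runs):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # talks to the default /var/tmp/spdk.sock
  $RPC nvmf_create_transport -t tcp -o -u 8192                           # transport options exactly as logged
  $RPC bdev_malloc_create 64 512 -b Malloc0                              # 64 MB backing bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420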
00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:53:44.459 [2024-12-09 05:48:38.637283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:53:44.459 { 00:53:44.459 "params": { 00:53:44.459 "name": "Nvme$subsystem", 00:53:44.459 "trtype": "$TEST_TRANSPORT", 00:53:44.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:53:44.459 "adrfam": "ipv4", 00:53:44.459 "trsvcid": "$NVMF_PORT", 00:53:44.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:53:44.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:53:44.459 "hdgst": ${hdgst:-false}, 00:53:44.459 "ddgst": ${ddgst:-false} 00:53:44.459 }, 00:53:44.459 "method": "bdev_nvme_attach_controller" 00:53:44.459 } 00:53:44.459 EOF 00:53:44.459 )") 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:53:44.459 05:48:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:53:44.459 "params": { 00:53:44.459 "name": "Nvme1", 00:53:44.459 "trtype": "tcp", 00:53:44.459 "traddr": "10.0.0.2", 00:53:44.459 "adrfam": "ipv4", 00:53:44.459 "trsvcid": "4420", 00:53:44.459 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:53:44.459 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:53:44.459 "hdgst": false, 00:53:44.459 "ddgst": false 00:53:44.459 }, 00:53:44.459 "method": "bdev_nvme_attach_controller" 00:53:44.459 }' 00:53:44.716 [2024-12-09 05:48:38.685088] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:53:44.717 [2024-12-09 05:48:38.685186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid759291 ] 00:53:44.717 [2024-12-09 05:48:38.753318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:44.717 [2024-12-09 05:48:38.813949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:44.974 Running I/O for 1 seconds... 00:53:46.344 8428.00 IOPS, 32.92 MiB/s 00:53:46.344 Latency(us) 00:53:46.344 [2024-12-09T04:48:40.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:46.344 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:53:46.344 Verification LBA range: start 0x0 length 0x4000 00:53:46.344 Nvme1n1 : 1.02 8420.62 32.89 0.00 0.00 15142.53 2803.48 13689.74 00:53:46.344 [2024-12-09T04:48:40.569Z] =================================================================================================================== 00:53:46.344 [2024-12-09T04:48:40.569Z] Total : 8420.62 32.89 0.00 0.00 15142.53 2803.48 13689.74 00:53:46.344 05:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=759545 00:53:46.344 05:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:53:46.344 05:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:53:46.344 05:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:53:46.344 05:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:53:46.344 05:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:53:46.344 05:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:53:46.344 05:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:53:46.344 { 00:53:46.344 "params": { 00:53:46.344 "name": "Nvme$subsystem", 00:53:46.344 "trtype": "$TEST_TRANSPORT", 00:53:46.344 "traddr": "$NVMF_FIRST_TARGET_IP", 00:53:46.344 "adrfam": "ipv4", 00:53:46.344 "trsvcid": "$NVMF_PORT", 00:53:46.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:53:46.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:53:46.344 "hdgst": ${hdgst:-false}, 00:53:46.344 "ddgst": ${ddgst:-false} 00:53:46.344 }, 00:53:46.344 "method": "bdev_nvme_attach_controller" 00:53:46.344 } 00:53:46.345 EOF 00:53:46.345 )") 00:53:46.345 05:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:53:46.345 05:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:53:46.345 05:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:53:46.345 05:48:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:53:46.345 "params": { 00:53:46.345 "name": "Nvme1", 00:53:46.345 "trtype": "tcp", 00:53:46.345 "traddr": "10.0.0.2", 00:53:46.345 "adrfam": "ipv4", 00:53:46.345 "trsvcid": "4420", 00:53:46.345 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:53:46.345 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:53:46.345 "hdgst": false, 00:53:46.345 "ddgst": false 00:53:46.345 }, 00:53:46.345 "method": "bdev_nvme_attach_controller" 00:53:46.345 }' 00:53:46.345 [2024-12-09 05:48:40.483416] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:53:46.345 [2024-12-09 05:48:40.483490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid759545 ] 00:53:46.345 [2024-12-09 05:48:40.553623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:46.602 [2024-12-09 05:48:40.612642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:46.602 Running I/O for 15 seconds... 00:53:48.986 8320.00 IOPS, 32.50 MiB/s [2024-12-09T04:48:43.474Z] 8448.50 IOPS, 33.00 MiB/s [2024-12-09T04:48:43.474Z] 05:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 759258 00:53:49.249 05:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:53:49.249 [2024-12-09 05:48:43.452964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:47544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.249 [2024-12-09 05:48:43.453027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.249 [2024-12-09 05:48:43.453088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:47552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.249 [2024-12-09 05:48:43.453108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.249 [2024-12-09 05:48:43.453125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:47560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.249 [2024-12-09 05:48:43.453150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.249 [2024-12-09 05:48:43.453167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:47568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.249 [2024-12-09 05:48:43.453184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.249 [2024-12-09 05:48:43.453218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:47576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.249 [2024-12-09 05:48:43.453233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.249 [2024-12-09 05:48:43.453250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:47584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.249 [2024-12-09 
05:48:43.453313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.249 [2024-12-09 05:48:43.453336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.249 [2024-12-09 05:48:43.453353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.249 [2024-12-09 05:48:43.453370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:47600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.249 [2024-12-09 05:48:43.453388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.249 [2024-12-09 05:48:43.453406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:47608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.249 [2024-12-09 05:48:43.453422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.249 [2024-12-09 05:48:43.453438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:47616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.249 [2024-12-09 05:48:43.453455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.249 [2024-12-09 05:48:43.453473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:47624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.249 [2024-12-09 05:48:43.453489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.249 [2024-12-09 05:48:43.453504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:47632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.249 [2024-12-09 05:48:43.453526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.249 [2024-12-09 05:48:43.453543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:47640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.249 [2024-12-09 05:48:43.453557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.249 [2024-12-09 05:48:43.453588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:47648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.249 [2024-12-09 05:48:43.453604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.249 [2024-12-09 05:48:43.453621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:47656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.249 [2024-12-09 05:48:43.453651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.249 [2024-12-09 05:48:43.453668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:47664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.249 [2024-12-09 05:48:43.453683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.249 [2024-12-09 05:48:43.453698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:47672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.249 [2024-12-09 05:48:43.453712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.249 [2024-12-09 05:48:43.453729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:47680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.249 [2024-12-09 05:48:43.453744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.249 [2024-12-09 05:48:43.453760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:47688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.249 [2024-12-09 05:48:43.453777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.249 [2024-12-09 05:48:43.453792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:47696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.249 [2024-12-09 05:48:43.453819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.249 [2024-12-09 05:48:43.453834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:47704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.249 [2024-12-09 05:48:43.453846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.249 [2024-12-09 05:48:43.453860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:47712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.249 [2024-12-09 05:48:43.453873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.249 [2024-12-09 05:48:43.453886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:47720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.453899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.453912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:47728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.453925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.453938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:47736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.453951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.453965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:47744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.453977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.453991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:47752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:47760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:47776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:47784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:47792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:47800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:47824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:47832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:47840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:47848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:47856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:47864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:47880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:47888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:47896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:47904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:53:49.250 [2024-12-09 05:48:43.454653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:47912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:47920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:47928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:47936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:47944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:47952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:47960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:47968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:47976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:47984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454929] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:47992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:48000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.454980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:48008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.454993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.455006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.455018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.455032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:48024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.455044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.250 [2024-12-09 05:48:43.455058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:48032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.250 [2024-12-09 05:48:43.455071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:48040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:48048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:48056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:48064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455188] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:81 nsid:1 lba:48072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:48080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:48088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:48096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:48104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:48112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:48120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:48128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:48136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:48144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 
lba:48152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:48160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:48168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:48176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:48184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:48192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:48200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:48208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:48216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:48232 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:48552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:49.251 [2024-12-09 05:48:43.455872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:48560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:49.251 [2024-12-09 05:48:43.455897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:48240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:48248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:48256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.455976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.455991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:48264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.456004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.456019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:48272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.456032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.456052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:48280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.456066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.456080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:48288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.456093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.456106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:48296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 
05:48:43.456119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.456133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:48304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.456152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.456167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:48312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.456180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.456194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:48320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.456207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.456220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:48328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.456233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.251 [2024-12-09 05:48:43.456247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:48336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.251 [2024-12-09 05:48:43.456286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.456304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:48344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.252 [2024-12-09 05:48:43.456335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.456352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:48352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.252 [2024-12-09 05:48:43.456366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.456382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:48360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.252 [2024-12-09 05:48:43.456396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.456413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:48368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.252 [2024-12-09 05:48:43.456427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.456443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:48376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.252 [2024-12-09 05:48:43.456461] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.456478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:48384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.252 [2024-12-09 05:48:43.456493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.456509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:48392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.252 [2024-12-09 05:48:43.456523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.456539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:48400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.252 [2024-12-09 05:48:43.456554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.456584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:48408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.252 [2024-12-09 05:48:43.456613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.456643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:48416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.252 [2024-12-09 05:48:43.456656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.456670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:48424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.252 [2024-12-09 05:48:43.456682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.456695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:48432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.252 [2024-12-09 05:48:43.456712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.456726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:48440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.252 [2024-12-09 05:48:43.456738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.456751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:48448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.252 [2024-12-09 05:48:43.456763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.456777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:48456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.252 [2024-12-09 05:48:43.456789] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.456802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:48464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.252 [2024-12-09 05:48:43.456814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.456828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.252 [2024-12-09 05:48:43.456840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.456857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:48480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.252 [2024-12-09 05:48:43.456869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.456882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:48488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.252 [2024-12-09 05:48:43.456894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.456908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:48496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.252 [2024-12-09 05:48:43.456920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.456933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:48504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.252 [2024-12-09 05:48:43.456945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.456958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:48512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.252 [2024-12-09 05:48:43.456970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.456984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:48520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.252 [2024-12-09 05:48:43.456996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.457009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:48528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.252 [2024-12-09 05:48:43.457021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.457034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:48536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:49.252 [2024-12-09 05:48:43.457052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.457065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24856b0 is same with the state(6) to be set 00:53:49.252 [2024-12-09 05:48:43.457081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:49.252 [2024-12-09 05:48:43.457091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:49.252 [2024-12-09 05:48:43.457101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48544 len:8 PRP1 0x0 PRP2 0x0 00:53:49.252 [2024-12-09 05:48:43.457113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.457251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:53:49.252 [2024-12-09 05:48:43.457280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.457298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:53:49.252 [2024-12-09 05:48:43.457338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.457352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:53:49.252 [2024-12-09 05:48:43.457371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.457385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:53:49.252 [2024-12-09 05:48:43.457399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:49.252 [2024-12-09 05:48:43.457412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.252 [2024-12-09 05:48:43.460781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.252 [2024-12-09 05:48:43.460816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.252 [2024-12-09 05:48:43.461389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.252 [2024-12-09 05:48:43.461419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.252 [2024-12-09 05:48:43.461437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.252 [2024-12-09 05:48:43.461679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.252 [2024-12-09 05:48:43.461883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.252 [2024-12-09 05:48:43.461901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] 
controller reinitialization failed 00:53:49.252 [2024-12-09 05:48:43.461915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.252 [2024-12-09 05:48:43.461931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:49.511 [2024-12-09 05:48:43.474505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.511 [2024-12-09 05:48:43.474935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.511 [2024-12-09 05:48:43.474965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.511 [2024-12-09 05:48:43.474981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.511 [2024-12-09 05:48:43.475218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.511 [2024-12-09 05:48:43.475457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.511 [2024-12-09 05:48:43.475480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.511 [2024-12-09 05:48:43.475493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.511 [2024-12-09 05:48:43.475506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:49.511 [2024-12-09 05:48:43.487497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.511 [2024-12-09 05:48:43.487845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.511 [2024-12-09 05:48:43.487874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.511 [2024-12-09 05:48:43.487890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.511 [2024-12-09 05:48:43.488126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.511 [2024-12-09 05:48:43.488374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.511 [2024-12-09 05:48:43.488401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.511 [2024-12-09 05:48:43.488417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.511 [2024-12-09 05:48:43.488430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
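Once host/bdevperf.sh kills the target application (the kill -9 759258 earlier in the trace), every read still queued on I/O qpair 1 is completed as ABORTED - SQ DELETION, and each reconnect attempt then fails inside posix_sock_create with errno = 111 because nothing is listening on 10.0.0.2:4420 any longer. Two quick checks against a saved copy of this console output; console.log is an illustrative file name and python3 is assumed to be available:

# errno 111 on Linux is ECONNREFUSED ("Connection refused"):
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# rough count of queued I/Os completed as aborted when the submission queue was deleted:
grep -c 'ABORTED - SQ DELETION' console.log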
00:53:49.511 [2024-12-09 05:48:43.500591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.511 [2024-12-09 05:48:43.500983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.511 [2024-12-09 05:48:43.501011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.511 [2024-12-09 05:48:43.501027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.511 [2024-12-09 05:48:43.501245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.511 [2024-12-09 05:48:43.501484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.511 [2024-12-09 05:48:43.501505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.511 [2024-12-09 05:48:43.501518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.511 [2024-12-09 05:48:43.501531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:49.511 [2024-12-09 05:48:43.513941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.511 [2024-12-09 05:48:43.514348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.511 [2024-12-09 05:48:43.514376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.511 [2024-12-09 05:48:43.514393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.511 [2024-12-09 05:48:43.514624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.511 [2024-12-09 05:48:43.514814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.512 [2024-12-09 05:48:43.514833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.512 [2024-12-09 05:48:43.514846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.512 [2024-12-09 05:48:43.514858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:49.512 [2024-12-09 05:48:43.527318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.512 [2024-12-09 05:48:43.527734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.512 [2024-12-09 05:48:43.527765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.512 [2024-12-09 05:48:43.527782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.512 [2024-12-09 05:48:43.528021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.512 [2024-12-09 05:48:43.528233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.512 [2024-12-09 05:48:43.528268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.512 [2024-12-09 05:48:43.528294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.512 [2024-12-09 05:48:43.528339] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:49.512 [2024-12-09 05:48:43.540680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.512 [2024-12-09 05:48:43.541033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.512 [2024-12-09 05:48:43.541061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.512 [2024-12-09 05:48:43.541077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.512 [2024-12-09 05:48:43.541324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.512 [2024-12-09 05:48:43.541552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.512 [2024-12-09 05:48:43.541576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.512 [2024-12-09 05:48:43.541590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.512 [2024-12-09 05:48:43.541604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:49.512 [2024-12-09 05:48:43.553988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.512 [2024-12-09 05:48:43.554347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.512 [2024-12-09 05:48:43.554376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.512 [2024-12-09 05:48:43.554393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.512 [2024-12-09 05:48:43.554628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.512 [2024-12-09 05:48:43.554824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.512 [2024-12-09 05:48:43.554845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.512 [2024-12-09 05:48:43.554859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.512 [2024-12-09 05:48:43.554871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:49.512 [2024-12-09 05:48:43.567193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.512 [2024-12-09 05:48:43.567580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.512 [2024-12-09 05:48:43.567624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.512 [2024-12-09 05:48:43.567641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.512 [2024-12-09 05:48:43.567878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.512 [2024-12-09 05:48:43.568087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.512 [2024-12-09 05:48:43.568108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.512 [2024-12-09 05:48:43.568121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.512 [2024-12-09 05:48:43.568134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:49.512 [2024-12-09 05:48:43.580493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.512 [2024-12-09 05:48:43.580874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.512 [2024-12-09 05:48:43.580904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.512 [2024-12-09 05:48:43.580920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.512 [2024-12-09 05:48:43.581157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.512 [2024-12-09 05:48:43.581420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.512 [2024-12-09 05:48:43.581444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.512 [2024-12-09 05:48:43.581459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.512 [2024-12-09 05:48:43.581473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:49.512 [2024-12-09 05:48:43.593757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.512 [2024-12-09 05:48:43.594111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.512 [2024-12-09 05:48:43.594141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.512 [2024-12-09 05:48:43.594157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.512 [2024-12-09 05:48:43.594400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.512 [2024-12-09 05:48:43.594639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.512 [2024-12-09 05:48:43.594660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.512 [2024-12-09 05:48:43.594673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.512 [2024-12-09 05:48:43.594685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:49.512 [2024-12-09 05:48:43.607082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.512 [2024-12-09 05:48:43.607421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.512 [2024-12-09 05:48:43.607450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.512 [2024-12-09 05:48:43.607468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.512 [2024-12-09 05:48:43.607677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.512 [2024-12-09 05:48:43.607902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.512 [2024-12-09 05:48:43.607923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.512 [2024-12-09 05:48:43.607937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.512 [2024-12-09 05:48:43.607950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:49.512 [2024-12-09 05:48:43.620352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.512 [2024-12-09 05:48:43.620687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.513 [2024-12-09 05:48:43.620731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.513 [2024-12-09 05:48:43.620747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.513 [2024-12-09 05:48:43.620973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.513 [2024-12-09 05:48:43.621184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.513 [2024-12-09 05:48:43.621205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.513 [2024-12-09 05:48:43.621218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.513 [2024-12-09 05:48:43.621231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:49.513 [2024-12-09 05:48:43.633636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.513 [2024-12-09 05:48:43.633992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.513 [2024-12-09 05:48:43.634021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.513 [2024-12-09 05:48:43.634037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.513 [2024-12-09 05:48:43.634286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.513 [2024-12-09 05:48:43.634509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.513 [2024-12-09 05:48:43.634529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.513 [2024-12-09 05:48:43.634543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.513 [2024-12-09 05:48:43.634556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:49.513 [2024-12-09 05:48:43.646880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.513 [2024-12-09 05:48:43.647360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.513 [2024-12-09 05:48:43.647390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.513 [2024-12-09 05:48:43.647407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.513 [2024-12-09 05:48:43.647660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.513 [2024-12-09 05:48:43.647870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.513 [2024-12-09 05:48:43.647891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.513 [2024-12-09 05:48:43.647904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.513 [2024-12-09 05:48:43.647917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:49.513 [2024-12-09 05:48:43.660186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.513 [2024-12-09 05:48:43.660608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.513 [2024-12-09 05:48:43.660637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.513 [2024-12-09 05:48:43.660653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.513 [2024-12-09 05:48:43.660889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.513 [2024-12-09 05:48:43.661089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.513 [2024-12-09 05:48:43.661116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.513 [2024-12-09 05:48:43.661145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.513 [2024-12-09 05:48:43.661159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:49.513 [2024-12-09 05:48:43.673471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.513 [2024-12-09 05:48:43.673875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.513 [2024-12-09 05:48:43.673904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.513 [2024-12-09 05:48:43.673921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.513 [2024-12-09 05:48:43.674145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.513 [2024-12-09 05:48:43.674403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.513 [2024-12-09 05:48:43.674426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.513 [2024-12-09 05:48:43.674441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.513 [2024-12-09 05:48:43.674454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:49.513 [2024-12-09 05:48:43.686789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.513 [2024-12-09 05:48:43.687173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.513 [2024-12-09 05:48:43.687201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.513 [2024-12-09 05:48:43.687217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.513 [2024-12-09 05:48:43.687486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.513 [2024-12-09 05:48:43.687720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.513 [2024-12-09 05:48:43.687741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.513 [2024-12-09 05:48:43.687754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.513 [2024-12-09 05:48:43.687766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:49.513 [2024-12-09 05:48:43.700163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.513 [2024-12-09 05:48:43.700570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.513 [2024-12-09 05:48:43.700613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.513 [2024-12-09 05:48:43.700630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.513 [2024-12-09 05:48:43.700850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.513 [2024-12-09 05:48:43.701060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.513 [2024-12-09 05:48:43.701080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.513 [2024-12-09 05:48:43.701093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.513 [2024-12-09 05:48:43.701111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:49.513 [2024-12-09 05:48:43.713539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.513 [2024-12-09 05:48:43.713845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.513 [2024-12-09 05:48:43.713888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.513 [2024-12-09 05:48:43.713905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.513 [2024-12-09 05:48:43.714122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.513 [2024-12-09 05:48:43.714361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.513 [2024-12-09 05:48:43.714383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.513 [2024-12-09 05:48:43.714396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.513 [2024-12-09 05:48:43.714410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:49.513 [2024-12-09 05:48:43.727194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.513 [2024-12-09 05:48:43.727567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.514 [2024-12-09 05:48:43.727598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.514 [2024-12-09 05:48:43.727615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.514 [2024-12-09 05:48:43.727847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.514 [2024-12-09 05:48:43.728064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.514 [2024-12-09 05:48:43.728085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.514 [2024-12-09 05:48:43.728099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.514 [2024-12-09 05:48:43.728126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:49.773 [2024-12-09 05:48:43.740937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.773 [2024-12-09 05:48:43.741346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.773 [2024-12-09 05:48:43.741376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.773 [2024-12-09 05:48:43.741394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.773 [2024-12-09 05:48:43.741627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.773 [2024-12-09 05:48:43.741862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.773 [2024-12-09 05:48:43.741884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.773 [2024-12-09 05:48:43.741898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.773 [2024-12-09 05:48:43.741911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:49.773 [2024-12-09 05:48:43.754227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.773 [2024-12-09 05:48:43.754613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.773 [2024-12-09 05:48:43.754642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.773 [2024-12-09 05:48:43.754659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.773 [2024-12-09 05:48:43.754903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.773 [2024-12-09 05:48:43.755098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.773 [2024-12-09 05:48:43.755118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.773 [2024-12-09 05:48:43.755131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.773 [2024-12-09 05:48:43.755144] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:49.773 [2024-12-09 05:48:43.767502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.773 [2024-12-09 05:48:43.767874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.773 [2024-12-09 05:48:43.767903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.773 [2024-12-09 05:48:43.767920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.773 [2024-12-09 05:48:43.768166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.773 [2024-12-09 05:48:43.768403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.773 [2024-12-09 05:48:43.768425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.773 [2024-12-09 05:48:43.768438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.773 [2024-12-09 05:48:43.768451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:49.773 [2024-12-09 05:48:43.780846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.773 [2024-12-09 05:48:43.781197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.773 [2024-12-09 05:48:43.781226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.773 [2024-12-09 05:48:43.781242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.773 [2024-12-09 05:48:43.781507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.773 [2024-12-09 05:48:43.781727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.773 [2024-12-09 05:48:43.781748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.773 [2024-12-09 05:48:43.781761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.773 [2024-12-09 05:48:43.781773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:49.773 [2024-12-09 05:48:43.794112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.773 [2024-12-09 05:48:43.794431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.773 [2024-12-09 05:48:43.794473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.773 [2024-12-09 05:48:43.794490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.773 [2024-12-09 05:48:43.794728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.773 [2024-12-09 05:48:43.794938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.773 [2024-12-09 05:48:43.794958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.773 [2024-12-09 05:48:43.794971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.773 [2024-12-09 05:48:43.794983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:49.773 7442.33 IOPS, 29.07 MiB/s [2024-12-09T04:48:43.998Z] [2024-12-09 05:48:43.808934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.773 [2024-12-09 05:48:43.809244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.773 [2024-12-09 05:48:43.809296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.773 [2024-12-09 05:48:43.809316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.773 [2024-12-09 05:48:43.809545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.773 [2024-12-09 05:48:43.809773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.773 [2024-12-09 05:48:43.809793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.773 [2024-12-09 05:48:43.809807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.773 [2024-12-09 05:48:43.809820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:49.773 [2024-12-09 05:48:43.822331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.773 [2024-12-09 05:48:43.822799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.773 [2024-12-09 05:48:43.822828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.773 [2024-12-09 05:48:43.822845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.773 [2024-12-09 05:48:43.823100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.773 [2024-12-09 05:48:43.823358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.773 [2024-12-09 05:48:43.823392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.773 [2024-12-09 05:48:43.823407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.773 [2024-12-09 05:48:43.823421] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:49.773 [2024-12-09 05:48:43.835658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.773 [2024-12-09 05:48:43.836028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.773 [2024-12-09 05:48:43.836055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.773 [2024-12-09 05:48:43.836071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.773 [2024-12-09 05:48:43.836301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.773 [2024-12-09 05:48:43.836519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.773 [2024-12-09 05:48:43.836544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.773 [2024-12-09 05:48:43.836558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.773 [2024-12-09 05:48:43.836585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:49.773 [2024-12-09 05:48:43.849020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.773 [2024-12-09 05:48:43.849403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.773 [2024-12-09 05:48:43.849432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.774 [2024-12-09 05:48:43.849450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.774 [2024-12-09 05:48:43.849695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.774 [2024-12-09 05:48:43.849891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.774 [2024-12-09 05:48:43.849911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.774 [2024-12-09 05:48:43.849924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.774 [2024-12-09 05:48:43.849936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:49.774 [2024-12-09 05:48:43.862279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.774 [2024-12-09 05:48:43.862629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.774 [2024-12-09 05:48:43.862658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.774 [2024-12-09 05:48:43.862674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.774 [2024-12-09 05:48:43.862898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.774 [2024-12-09 05:48:43.863107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.774 [2024-12-09 05:48:43.863128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.774 [2024-12-09 05:48:43.863141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.774 [2024-12-09 05:48:43.863154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:49.774 [2024-12-09 05:48:43.875610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.774 [2024-12-09 05:48:43.875981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.774 [2024-12-09 05:48:43.876010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.774 [2024-12-09 05:48:43.876027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.774 [2024-12-09 05:48:43.876270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.774 [2024-12-09 05:48:43.876511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.774 [2024-12-09 05:48:43.876543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.774 [2024-12-09 05:48:43.876558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.774 [2024-12-09 05:48:43.876579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:49.774 [2024-12-09 05:48:43.888916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.774 [2024-12-09 05:48:43.889282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.774 [2024-12-09 05:48:43.889312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.774 [2024-12-09 05:48:43.889329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.774 [2024-12-09 05:48:43.889560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.774 [2024-12-09 05:48:43.889777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.774 [2024-12-09 05:48:43.889799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.774 [2024-12-09 05:48:43.889812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.774 [2024-12-09 05:48:43.889824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:49.774 [2024-12-09 05:48:43.902173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.774 [2024-12-09 05:48:43.902564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.774 [2024-12-09 05:48:43.902593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.774 [2024-12-09 05:48:43.902609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.774 [2024-12-09 05:48:43.902844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.774 [2024-12-09 05:48:43.903039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.774 [2024-12-09 05:48:43.903059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.774 [2024-12-09 05:48:43.903073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.774 [2024-12-09 05:48:43.903085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:49.774 [2024-12-09 05:48:43.915386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.774 [2024-12-09 05:48:43.915758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.774 [2024-12-09 05:48:43.915789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.774 [2024-12-09 05:48:43.915806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.774 [2024-12-09 05:48:43.916048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.774 [2024-12-09 05:48:43.916282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.774 [2024-12-09 05:48:43.916305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.774 [2024-12-09 05:48:43.916335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.774 [2024-12-09 05:48:43.916349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:49.774 [2024-12-09 05:48:43.928681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.774 [2024-12-09 05:48:43.928980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.774 [2024-12-09 05:48:43.929023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.774 [2024-12-09 05:48:43.929040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.774 [2024-12-09 05:48:43.929258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.774 [2024-12-09 05:48:43.929468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.774 [2024-12-09 05:48:43.929488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.774 [2024-12-09 05:48:43.929502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.774 [2024-12-09 05:48:43.929516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:49.774 [2024-12-09 05:48:43.941824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.774 [2024-12-09 05:48:43.942177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.774 [2024-12-09 05:48:43.942206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.774 [2024-12-09 05:48:43.942222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.774 [2024-12-09 05:48:43.942489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.774 [2024-12-09 05:48:43.942705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.774 [2024-12-09 05:48:43.942725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.774 [2024-12-09 05:48:43.942739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.774 [2024-12-09 05:48:43.942751] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:49.774 [2024-12-09 05:48:43.955027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.774 [2024-12-09 05:48:43.955445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.774 [2024-12-09 05:48:43.955475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.774 [2024-12-09 05:48:43.955492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.774 [2024-12-09 05:48:43.955737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.774 [2024-12-09 05:48:43.955948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.774 [2024-12-09 05:48:43.955968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.774 [2024-12-09 05:48:43.955981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.774 [2024-12-09 05:48:43.955993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:49.774 [2024-12-09 05:48:43.968372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.774 [2024-12-09 05:48:43.968765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.774 [2024-12-09 05:48:43.968794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.774 [2024-12-09 05:48:43.968811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.774 [2024-12-09 05:48:43.969039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.774 [2024-12-09 05:48:43.969250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.774 [2024-12-09 05:48:43.969278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.774 [2024-12-09 05:48:43.969309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.774 [2024-12-09 05:48:43.969326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:49.774 [2024-12-09 05:48:43.981784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.774 [2024-12-09 05:48:43.982106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.774 [2024-12-09 05:48:43.982135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.775 [2024-12-09 05:48:43.982151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.775 [2024-12-09 05:48:43.982407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.775 [2024-12-09 05:48:43.982658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.775 [2024-12-09 05:48:43.982679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.775 [2024-12-09 05:48:43.982692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.775 [2024-12-09 05:48:43.982704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:49.775 [2024-12-09 05:48:43.995434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:49.775 [2024-12-09 05:48:43.995793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:49.775 [2024-12-09 05:48:43.995822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:49.775 [2024-12-09 05:48:43.995853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:49.775 [2024-12-09 05:48:43.996099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:49.775 [2024-12-09 05:48:43.996333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:49.775 [2024-12-09 05:48:43.996356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:49.775 [2024-12-09 05:48:43.996371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:49.775 [2024-12-09 05:48:43.996388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.034 [2024-12-09 05:48:44.008835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.034 [2024-12-09 05:48:44.009189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.034 [2024-12-09 05:48:44.009218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.034 [2024-12-09 05:48:44.009235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.034 [2024-12-09 05:48:44.009482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.034 [2024-12-09 05:48:44.009695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.034 [2024-12-09 05:48:44.009720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.034 [2024-12-09 05:48:44.009734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.034 [2024-12-09 05:48:44.009746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.034 [2024-12-09 05:48:44.022080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.034 [2024-12-09 05:48:44.022431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.034 [2024-12-09 05:48:44.022461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.034 [2024-12-09 05:48:44.022478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.034 [2024-12-09 05:48:44.022733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.034 [2024-12-09 05:48:44.022927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.034 [2024-12-09 05:48:44.022947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.034 [2024-12-09 05:48:44.022960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.034 [2024-12-09 05:48:44.022972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.034 [2024-12-09 05:48:44.035443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.034 [2024-12-09 05:48:44.035793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.034 [2024-12-09 05:48:44.035821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.034 [2024-12-09 05:48:44.035837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.034 [2024-12-09 05:48:44.036053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.034 [2024-12-09 05:48:44.036265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.034 [2024-12-09 05:48:44.036311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.034 [2024-12-09 05:48:44.036326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.034 [2024-12-09 05:48:44.036339] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.034 [2024-12-09 05:48:44.048681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.034 [2024-12-09 05:48:44.049066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.034 [2024-12-09 05:48:44.049094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.034 [2024-12-09 05:48:44.049111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.034 [2024-12-09 05:48:44.049362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.034 [2024-12-09 05:48:44.049601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.034 [2024-12-09 05:48:44.049622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.034 [2024-12-09 05:48:44.049636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.034 [2024-12-09 05:48:44.049653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.034 [2024-12-09 05:48:44.062000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.034 [2024-12-09 05:48:44.062352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.034 [2024-12-09 05:48:44.062391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.034 [2024-12-09 05:48:44.062408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.034 [2024-12-09 05:48:44.062651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.034 [2024-12-09 05:48:44.062844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.034 [2024-12-09 05:48:44.062865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.034 [2024-12-09 05:48:44.062878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.034 [2024-12-09 05:48:44.062890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.034 [2024-12-09 05:48:44.075244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.034 [2024-12-09 05:48:44.075615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.034 [2024-12-09 05:48:44.075643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.034 [2024-12-09 05:48:44.075659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.034 [2024-12-09 05:48:44.075879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.034 [2024-12-09 05:48:44.076089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.034 [2024-12-09 05:48:44.076110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.034 [2024-12-09 05:48:44.076123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.034 [2024-12-09 05:48:44.076136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.034 [2024-12-09 05:48:44.088488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.034 [2024-12-09 05:48:44.088855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.034 [2024-12-09 05:48:44.088885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.034 [2024-12-09 05:48:44.088903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.034 [2024-12-09 05:48:44.089145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.034 [2024-12-09 05:48:44.089401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.034 [2024-12-09 05:48:44.089423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.034 [2024-12-09 05:48:44.089436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.034 [2024-12-09 05:48:44.089449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.034 [2024-12-09 05:48:44.101758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.034 [2024-12-09 05:48:44.102120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.034 [2024-12-09 05:48:44.102149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.034 [2024-12-09 05:48:44.102166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.034 [2024-12-09 05:48:44.102419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.034 [2024-12-09 05:48:44.102653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.034 [2024-12-09 05:48:44.102673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.034 [2024-12-09 05:48:44.102686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.035 [2024-12-09 05:48:44.102699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.035 [2024-12-09 05:48:44.114932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.035 [2024-12-09 05:48:44.115317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.035 [2024-12-09 05:48:44.115346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.035 [2024-12-09 05:48:44.115362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.035 [2024-12-09 05:48:44.115586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.035 [2024-12-09 05:48:44.115794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.035 [2024-12-09 05:48:44.115815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.035 [2024-12-09 05:48:44.115828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.035 [2024-12-09 05:48:44.115841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.035 [2024-12-09 05:48:44.128229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.035 [2024-12-09 05:48:44.128577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.035 [2024-12-09 05:48:44.128606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.035 [2024-12-09 05:48:44.128623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.035 [2024-12-09 05:48:44.128846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.035 [2024-12-09 05:48:44.129055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.035 [2024-12-09 05:48:44.129075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.035 [2024-12-09 05:48:44.129088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.035 [2024-12-09 05:48:44.129101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.035 [2024-12-09 05:48:44.141478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.035 [2024-12-09 05:48:44.141921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.035 [2024-12-09 05:48:44.141950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.035 [2024-12-09 05:48:44.141968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.035 [2024-12-09 05:48:44.142216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.035 [2024-12-09 05:48:44.142461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.035 [2024-12-09 05:48:44.142484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.035 [2024-12-09 05:48:44.142498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.035 [2024-12-09 05:48:44.142512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.035 [2024-12-09 05:48:44.154805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.035 [2024-12-09 05:48:44.155157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.035 [2024-12-09 05:48:44.155185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.035 [2024-12-09 05:48:44.155200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.035 [2024-12-09 05:48:44.155449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.035 [2024-12-09 05:48:44.155671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.035 [2024-12-09 05:48:44.155692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.035 [2024-12-09 05:48:44.155720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.035 [2024-12-09 05:48:44.155733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.035 [2024-12-09 05:48:44.168123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.035 [2024-12-09 05:48:44.168471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.035 [2024-12-09 05:48:44.168500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.035 [2024-12-09 05:48:44.168517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.035 [2024-12-09 05:48:44.168769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.035 [2024-12-09 05:48:44.168963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.035 [2024-12-09 05:48:44.168984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.035 [2024-12-09 05:48:44.168997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.035 [2024-12-09 05:48:44.169009] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.035 [2024-12-09 05:48:44.181343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.035 [2024-12-09 05:48:44.181721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.035 [2024-12-09 05:48:44.181750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.035 [2024-12-09 05:48:44.181766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.035 [2024-12-09 05:48:44.182003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.035 [2024-12-09 05:48:44.182212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.035 [2024-12-09 05:48:44.182237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.035 [2024-12-09 05:48:44.182252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.035 [2024-12-09 05:48:44.182265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.035 [2024-12-09 05:48:44.194547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.035 [2024-12-09 05:48:44.194885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.035 [2024-12-09 05:48:44.194913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.035 [2024-12-09 05:48:44.194930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.035 [2024-12-09 05:48:44.195154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.035 [2024-12-09 05:48:44.195410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.035 [2024-12-09 05:48:44.195432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.035 [2024-12-09 05:48:44.195445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.035 [2024-12-09 05:48:44.195459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.035 [2024-12-09 05:48:44.207766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.035 [2024-12-09 05:48:44.208185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.035 [2024-12-09 05:48:44.208214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.035 [2024-12-09 05:48:44.208231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.035 [2024-12-09 05:48:44.208472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.035 [2024-12-09 05:48:44.208706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.035 [2024-12-09 05:48:44.208727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.035 [2024-12-09 05:48:44.208739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.035 [2024-12-09 05:48:44.208752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.035 [2024-12-09 05:48:44.221074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.035 [2024-12-09 05:48:44.221518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.035 [2024-12-09 05:48:44.221559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.035 [2024-12-09 05:48:44.221591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.035 [2024-12-09 05:48:44.221851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.035 [2024-12-09 05:48:44.222045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.035 [2024-12-09 05:48:44.222065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.035 [2024-12-09 05:48:44.222078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.035 [2024-12-09 05:48:44.222096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.035 [2024-12-09 05:48:44.234715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.035 [2024-12-09 05:48:44.235067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.035 [2024-12-09 05:48:44.235097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.035 [2024-12-09 05:48:44.235114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.035 [2024-12-09 05:48:44.235363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.035 [2024-12-09 05:48:44.235563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.035 [2024-12-09 05:48:44.235585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.035 [2024-12-09 05:48:44.235613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.036 [2024-12-09 05:48:44.235627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.036 [2024-12-09 05:48:44.248021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.036 [2024-12-09 05:48:44.248385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.036 [2024-12-09 05:48:44.248414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.036 [2024-12-09 05:48:44.248430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.036 [2024-12-09 05:48:44.248653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.036 [2024-12-09 05:48:44.248862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.036 [2024-12-09 05:48:44.248883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.036 [2024-12-09 05:48:44.248896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.036 [2024-12-09 05:48:44.248908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.295 [2024-12-09 05:48:44.261415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.295 [2024-12-09 05:48:44.261800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.295 [2024-12-09 05:48:44.261828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.295 [2024-12-09 05:48:44.261845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.295 [2024-12-09 05:48:44.262089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.295 [2024-12-09 05:48:44.262337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.295 [2024-12-09 05:48:44.262375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.295 [2024-12-09 05:48:44.262390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.295 [2024-12-09 05:48:44.262404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.295 [2024-12-09 05:48:44.274637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.295 [2024-12-09 05:48:44.275026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.295 [2024-12-09 05:48:44.275056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.295 [2024-12-09 05:48:44.275072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.295 [2024-12-09 05:48:44.275307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.295 [2024-12-09 05:48:44.275513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.295 [2024-12-09 05:48:44.275533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.295 [2024-12-09 05:48:44.275547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.295 [2024-12-09 05:48:44.275561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.295 [2024-12-09 05:48:44.287884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.295 [2024-12-09 05:48:44.288281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.295 [2024-12-09 05:48:44.288310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.295 [2024-12-09 05:48:44.288327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.295 [2024-12-09 05:48:44.288564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.295 [2024-12-09 05:48:44.288759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.295 [2024-12-09 05:48:44.288780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.295 [2024-12-09 05:48:44.288793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.295 [2024-12-09 05:48:44.288805] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.295 [2024-12-09 05:48:44.301091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.295 [2024-12-09 05:48:44.301534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.295 [2024-12-09 05:48:44.301578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.295 [2024-12-09 05:48:44.301595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.295 [2024-12-09 05:48:44.301835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.295 [2024-12-09 05:48:44.302047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.295 [2024-12-09 05:48:44.302068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.295 [2024-12-09 05:48:44.302081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.295 [2024-12-09 05:48:44.302093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.295 [2024-12-09 05:48:44.314381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.295 [2024-12-09 05:48:44.314795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.295 [2024-12-09 05:48:44.314823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.295 [2024-12-09 05:48:44.314839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.295 [2024-12-09 05:48:44.315067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.295 [2024-12-09 05:48:44.315303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.295 [2024-12-09 05:48:44.315340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.295 [2024-12-09 05:48:44.315354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.295 [2024-12-09 05:48:44.315368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.295 [2024-12-09 05:48:44.327751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.295 [2024-12-09 05:48:44.328104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.295 [2024-12-09 05:48:44.328133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.295 [2024-12-09 05:48:44.328150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.295 [2024-12-09 05:48:44.328424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.295 [2024-12-09 05:48:44.328664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.296 [2024-12-09 05:48:44.328685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.296 [2024-12-09 05:48:44.328698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.296 [2024-12-09 05:48:44.328710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.296 [2024-12-09 05:48:44.340993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.296 [2024-12-09 05:48:44.341346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.296 [2024-12-09 05:48:44.341377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.296 [2024-12-09 05:48:44.341395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.296 [2024-12-09 05:48:44.341641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.296 [2024-12-09 05:48:44.341853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.296 [2024-12-09 05:48:44.341874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.296 [2024-12-09 05:48:44.341887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.296 [2024-12-09 05:48:44.341899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.296 [2024-12-09 05:48:44.354200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.296 [2024-12-09 05:48:44.354546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.296 [2024-12-09 05:48:44.354591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.296 [2024-12-09 05:48:44.354607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.296 [2024-12-09 05:48:44.354825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.296 [2024-12-09 05:48:44.355035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.296 [2024-12-09 05:48:44.355061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.296 [2024-12-09 05:48:44.355075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.296 [2024-12-09 05:48:44.355088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.296 [2024-12-09 05:48:44.367426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.296 [2024-12-09 05:48:44.367819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.296 [2024-12-09 05:48:44.367848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.296 [2024-12-09 05:48:44.367865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.296 [2024-12-09 05:48:44.368102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.296 [2024-12-09 05:48:44.368340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.296 [2024-12-09 05:48:44.368377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.296 [2024-12-09 05:48:44.368392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.296 [2024-12-09 05:48:44.368405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.296 [2024-12-09 05:48:44.380703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.296 [2024-12-09 05:48:44.381057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.296 [2024-12-09 05:48:44.381086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.296 [2024-12-09 05:48:44.381103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.296 [2024-12-09 05:48:44.381361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.296 [2024-12-09 05:48:44.381583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.296 [2024-12-09 05:48:44.381604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.296 [2024-12-09 05:48:44.381618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.296 [2024-12-09 05:48:44.381646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.296 [2024-12-09 05:48:44.394044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.296 [2024-12-09 05:48:44.394429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.296 [2024-12-09 05:48:44.394459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.296 [2024-12-09 05:48:44.394475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.296 [2024-12-09 05:48:44.394719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.296 [2024-12-09 05:48:44.394928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.296 [2024-12-09 05:48:44.394949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.296 [2024-12-09 05:48:44.394962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.296 [2024-12-09 05:48:44.394979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.296 [2024-12-09 05:48:44.407590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.296 [2024-12-09 05:48:44.407973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.296 [2024-12-09 05:48:44.408003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.296 [2024-12-09 05:48:44.408021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.296 [2024-12-09 05:48:44.408267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.296 [2024-12-09 05:48:44.408512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.296 [2024-12-09 05:48:44.408537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.296 [2024-12-09 05:48:44.408552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.296 [2024-12-09 05:48:44.408566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.296 [2024-12-09 05:48:44.420876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.296 [2024-12-09 05:48:44.421233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.296 [2024-12-09 05:48:44.421264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.296 [2024-12-09 05:48:44.421290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.296 [2024-12-09 05:48:44.421523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.296 [2024-12-09 05:48:44.421752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.296 [2024-12-09 05:48:44.421774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.296 [2024-12-09 05:48:44.421787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.296 [2024-12-09 05:48:44.421799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.296 [2024-12-09 05:48:44.434086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.296 [2024-12-09 05:48:44.434462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.296 [2024-12-09 05:48:44.434492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.296 [2024-12-09 05:48:44.434508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.296 [2024-12-09 05:48:44.434773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.296 [2024-12-09 05:48:44.434962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.296 [2024-12-09 05:48:44.434981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.296 [2024-12-09 05:48:44.434994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.296 [2024-12-09 05:48:44.435006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.296 [2024-12-09 05:48:44.447338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.296 [2024-12-09 05:48:44.447794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.296 [2024-12-09 05:48:44.447824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.296 [2024-12-09 05:48:44.447840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.296 [2024-12-09 05:48:44.448078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.296 [2024-12-09 05:48:44.448311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.296 [2024-12-09 05:48:44.448333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.296 [2024-12-09 05:48:44.448346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.296 [2024-12-09 05:48:44.448360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.296 [2024-12-09 05:48:44.460419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.296 [2024-12-09 05:48:44.460765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.296 [2024-12-09 05:48:44.460794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.296 [2024-12-09 05:48:44.460811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.296 [2024-12-09 05:48:44.461047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.296 [2024-12-09 05:48:44.461251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.297 [2024-12-09 05:48:44.461281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.297 [2024-12-09 05:48:44.461313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.297 [2024-12-09 05:48:44.461326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.297 [2024-12-09 05:48:44.473536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.297 [2024-12-09 05:48:44.473853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.297 [2024-12-09 05:48:44.473881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.297 [2024-12-09 05:48:44.473897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.297 [2024-12-09 05:48:44.474115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.297 [2024-12-09 05:48:44.474365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.297 [2024-12-09 05:48:44.474388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.297 [2024-12-09 05:48:44.474403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.297 [2024-12-09 05:48:44.474416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.297 [2024-12-09 05:48:44.487011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.297 [2024-12-09 05:48:44.487428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.297 [2024-12-09 05:48:44.487458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.297 [2024-12-09 05:48:44.487474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.297 [2024-12-09 05:48:44.487718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.297 [2024-12-09 05:48:44.487924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.297 [2024-12-09 05:48:44.487945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.297 [2024-12-09 05:48:44.487958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.297 [2024-12-09 05:48:44.487970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.297 [2024-12-09 05:48:44.500050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.297 [2024-12-09 05:48:44.500403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.297 [2024-12-09 05:48:44.500432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.297 [2024-12-09 05:48:44.500449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.297 [2024-12-09 05:48:44.500684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.297 [2024-12-09 05:48:44.500897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.297 [2024-12-09 05:48:44.500918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.297 [2024-12-09 05:48:44.500931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.297 [2024-12-09 05:48:44.500944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.297 [2024-12-09 05:48:44.513131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.297 [2024-12-09 05:48:44.513547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.297 [2024-12-09 05:48:44.513577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.297 [2024-12-09 05:48:44.513593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.297 [2024-12-09 05:48:44.513828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.297 [2024-12-09 05:48:44.514067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.297 [2024-12-09 05:48:44.514103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.297 [2024-12-09 05:48:44.514117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.297 [2024-12-09 05:48:44.514130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.555 [2024-12-09 05:48:44.526532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.555 [2024-12-09 05:48:44.526834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.555 [2024-12-09 05:48:44.526862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.556 [2024-12-09 05:48:44.526878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.556 [2024-12-09 05:48:44.527075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.556 [2024-12-09 05:48:44.527343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.556 [2024-12-09 05:48:44.527374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.556 [2024-12-09 05:48:44.527390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.556 [2024-12-09 05:48:44.527404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.556 [2024-12-09 05:48:44.539574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.556 [2024-12-09 05:48:44.539950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.556 [2024-12-09 05:48:44.539979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.556 [2024-12-09 05:48:44.539995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.556 [2024-12-09 05:48:44.540215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.556 [2024-12-09 05:48:44.540458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.556 [2024-12-09 05:48:44.540480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.556 [2024-12-09 05:48:44.540494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.556 [2024-12-09 05:48:44.540506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.556 [2024-12-09 05:48:44.552660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.556 [2024-12-09 05:48:44.553016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.556 [2024-12-09 05:48:44.553045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.556 [2024-12-09 05:48:44.553061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.556 [2024-12-09 05:48:44.553312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.556 [2024-12-09 05:48:44.553506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.556 [2024-12-09 05:48:44.553527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.556 [2024-12-09 05:48:44.553541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.556 [2024-12-09 05:48:44.553554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.556 [2024-12-09 05:48:44.565747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.556 [2024-12-09 05:48:44.566120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.556 [2024-12-09 05:48:44.566148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.556 [2024-12-09 05:48:44.566164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.556 [2024-12-09 05:48:44.566393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.556 [2024-12-09 05:48:44.566616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.556 [2024-12-09 05:48:44.566636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.556 [2024-12-09 05:48:44.566649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.556 [2024-12-09 05:48:44.566666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.556 [2024-12-09 05:48:44.578890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.556 [2024-12-09 05:48:44.579307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.556 [2024-12-09 05:48:44.579351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.556 [2024-12-09 05:48:44.579368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.556 [2024-12-09 05:48:44.579603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.556 [2024-12-09 05:48:44.579807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.556 [2024-12-09 05:48:44.579828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.556 [2024-12-09 05:48:44.579840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.556 [2024-12-09 05:48:44.579852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.556 [2024-12-09 05:48:44.592008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.556 [2024-12-09 05:48:44.592320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.556 [2024-12-09 05:48:44.592348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.556 [2024-12-09 05:48:44.592365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.556 [2024-12-09 05:48:44.592583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.556 [2024-12-09 05:48:44.592788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.556 [2024-12-09 05:48:44.592808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.556 [2024-12-09 05:48:44.592820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.556 [2024-12-09 05:48:44.592831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.556 [2024-12-09 05:48:44.605027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.556 [2024-12-09 05:48:44.605436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.556 [2024-12-09 05:48:44.605465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.556 [2024-12-09 05:48:44.605482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.556 [2024-12-09 05:48:44.605719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.556 [2024-12-09 05:48:44.605923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.556 [2024-12-09 05:48:44.605943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.556 [2024-12-09 05:48:44.605956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.556 [2024-12-09 05:48:44.605968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.556 [2024-12-09 05:48:44.618351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.556 [2024-12-09 05:48:44.618733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.556 [2024-12-09 05:48:44.618761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.556 [2024-12-09 05:48:44.618777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.556 [2024-12-09 05:48:44.618995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.556 [2024-12-09 05:48:44.619200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.556 [2024-12-09 05:48:44.619219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.556 [2024-12-09 05:48:44.619231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.556 [2024-12-09 05:48:44.619244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.556 [2024-12-09 05:48:44.631401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.556 [2024-12-09 05:48:44.631779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.556 [2024-12-09 05:48:44.631807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.556 [2024-12-09 05:48:44.631823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.556 [2024-12-09 05:48:44.632040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.556 [2024-12-09 05:48:44.632246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.556 [2024-12-09 05:48:44.632292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.556 [2024-12-09 05:48:44.632307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.556 [2024-12-09 05:48:44.632336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.556 [2024-12-09 05:48:44.644526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.556 [2024-12-09 05:48:44.644871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.556 [2024-12-09 05:48:44.644900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.556 [2024-12-09 05:48:44.644916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.556 [2024-12-09 05:48:44.645152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.556 [2024-12-09 05:48:44.645406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.556 [2024-12-09 05:48:44.645429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.556 [2024-12-09 05:48:44.645443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.556 [2024-12-09 05:48:44.645456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.556 [2024-12-09 05:48:44.657629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.556 [2024-12-09 05:48:44.657984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.556 [2024-12-09 05:48:44.658012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.556 [2024-12-09 05:48:44.658029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.556 [2024-12-09 05:48:44.658270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.556 [2024-12-09 05:48:44.658497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.556 [2024-12-09 05:48:44.658517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.556 [2024-12-09 05:48:44.658531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.556 [2024-12-09 05:48:44.658543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.556 [2024-12-09 05:48:44.670679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.556 [2024-12-09 05:48:44.670979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.556 [2024-12-09 05:48:44.671021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.556 [2024-12-09 05:48:44.671037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.556 [2024-12-09 05:48:44.671248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.556 [2024-12-09 05:48:44.671472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.556 [2024-12-09 05:48:44.671495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.556 [2024-12-09 05:48:44.671508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.556 [2024-12-09 05:48:44.671521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.556 [2024-12-09 05:48:44.683736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.556 [2024-12-09 05:48:44.684048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.556 [2024-12-09 05:48:44.684119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.556 [2024-12-09 05:48:44.684135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.556 [2024-12-09 05:48:44.684381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.556 [2024-12-09 05:48:44.684612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.556 [2024-12-09 05:48:44.684631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.556 [2024-12-09 05:48:44.684645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.556 [2024-12-09 05:48:44.684657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.556 [2024-12-09 05:48:44.696951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.556 [2024-12-09 05:48:44.697359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.556 [2024-12-09 05:48:44.697388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.556 [2024-12-09 05:48:44.697404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.556 [2024-12-09 05:48:44.697642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.556 [2024-12-09 05:48:44.697865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.556 [2024-12-09 05:48:44.697890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.556 [2024-12-09 05:48:44.697903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.556 [2024-12-09 05:48:44.697916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.556 [2024-12-09 05:48:44.710196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.556 [2024-12-09 05:48:44.710526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.556 [2024-12-09 05:48:44.710555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.556 [2024-12-09 05:48:44.710571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.556 [2024-12-09 05:48:44.710789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.556 [2024-12-09 05:48:44.710994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.556 [2024-12-09 05:48:44.711015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.556 [2024-12-09 05:48:44.711027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.556 [2024-12-09 05:48:44.711039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.556 [2024-12-09 05:48:44.723366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.556 [2024-12-09 05:48:44.723768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.556 [2024-12-09 05:48:44.723797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.556 [2024-12-09 05:48:44.723813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.556 [2024-12-09 05:48:44.724050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.556 [2024-12-09 05:48:44.724296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.556 [2024-12-09 05:48:44.724328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.556 [2024-12-09 05:48:44.724341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.556 [2024-12-09 05:48:44.724353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.556 [2024-12-09 05:48:44.736459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.556 [2024-12-09 05:48:44.736803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.556 [2024-12-09 05:48:44.736832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.556 [2024-12-09 05:48:44.736849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.556 [2024-12-09 05:48:44.737085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.556 [2024-12-09 05:48:44.737315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.556 [2024-12-09 05:48:44.737340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.556 [2024-12-09 05:48:44.737353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.556 [2024-12-09 05:48:44.737370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.556 [2024-12-09 05:48:44.749673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.556 [2024-12-09 05:48:44.750067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.556 [2024-12-09 05:48:44.750117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.556 [2024-12-09 05:48:44.750133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.556 [2024-12-09 05:48:44.750376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.556 [2024-12-09 05:48:44.750576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.556 [2024-12-09 05:48:44.750613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.556 [2024-12-09 05:48:44.750626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.557 [2024-12-09 05:48:44.750639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.557 [2024-12-09 05:48:44.762772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.557 [2024-12-09 05:48:44.763129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.557 [2024-12-09 05:48:44.763158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.557 [2024-12-09 05:48:44.763174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.557 [2024-12-09 05:48:44.763442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.557 [2024-12-09 05:48:44.763671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.557 [2024-12-09 05:48:44.763692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.557 [2024-12-09 05:48:44.763704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.557 [2024-12-09 05:48:44.763717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.557 [2024-12-09 05:48:44.776162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.557 [2024-12-09 05:48:44.776622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.557 [2024-12-09 05:48:44.776651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.557 [2024-12-09 05:48:44.776668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.557 [2024-12-09 05:48:44.776907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.557 [2024-12-09 05:48:44.777142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.557 [2024-12-09 05:48:44.777163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.557 [2024-12-09 05:48:44.777177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.557 [2024-12-09 05:48:44.777204] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.815 [2024-12-09 05:48:44.789313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.815 [2024-12-09 05:48:44.789665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.815 [2024-12-09 05:48:44.789695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.815 [2024-12-09 05:48:44.789712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.815 [2024-12-09 05:48:44.789952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.815 [2024-12-09 05:48:44.790155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.815 [2024-12-09 05:48:44.790176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.815 [2024-12-09 05:48:44.790189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.815 [2024-12-09 05:48:44.790201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.815 [2024-12-09 05:48:44.802530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.815 [2024-12-09 05:48:44.802888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.815 [2024-12-09 05:48:44.802916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.815 [2024-12-09 05:48:44.802932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.815 [2024-12-09 05:48:44.803168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.815 [2024-12-09 05:48:44.803404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.816 [2024-12-09 05:48:44.803426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.816 [2024-12-09 05:48:44.803439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.816 [2024-12-09 05:48:44.803452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.816 5581.75 IOPS, 21.80 MiB/s [2024-12-09T04:48:45.041Z] [2024-12-09 05:48:44.815656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.816 [2024-12-09 05:48:44.816039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.816 [2024-12-09 05:48:44.816068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.816 [2024-12-09 05:48:44.816084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.816 [2024-12-09 05:48:44.816316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.816 [2024-12-09 05:48:44.816516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.816 [2024-12-09 05:48:44.816537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.816 [2024-12-09 05:48:44.816550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.816 [2024-12-09 05:48:44.816578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
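The interleaved "5581.75 IOPS, 21.80 MiB/s" entry above is not an error record; it reads like the periodic throughput sample printed by the I/O workload tool driving this test (SPDK's bdevperf, for example, emits per-interval IOPS/MiB/s lines). The two numbers are consistent with a 4 KiB I/O size: 5581.75 IOPS * 4096 B = 22,862,848 B/s, and 22,862,848 / 1,048,576 ~ 21.80 MiB/s. The 4 KiB size is inferred from that ratio, not stated anywhere in this log.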
00:53:50.816 [2024-12-09 05:48:44.828822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.816 [2024-12-09 05:48:44.829197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.816 [2024-12-09 05:48:44.829226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.816 [2024-12-09 05:48:44.829248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.816 [2024-12-09 05:48:44.829498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.816 [2024-12-09 05:48:44.829737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.816 [2024-12-09 05:48:44.829757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.816 [2024-12-09 05:48:44.829770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.816 [2024-12-09 05:48:44.829782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.816 [2024-12-09 05:48:44.842026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.816 [2024-12-09 05:48:44.842400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.816 [2024-12-09 05:48:44.842428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.816 [2024-12-09 05:48:44.842444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.816 [2024-12-09 05:48:44.842662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.816 [2024-12-09 05:48:44.842868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.816 [2024-12-09 05:48:44.842888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.816 [2024-12-09 05:48:44.842901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.816 [2024-12-09 05:48:44.842913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.816 [2024-12-09 05:48:44.855193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.816 [2024-12-09 05:48:44.855596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.816 [2024-12-09 05:48:44.855640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.816 [2024-12-09 05:48:44.855656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.816 [2024-12-09 05:48:44.855905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.816 [2024-12-09 05:48:44.856093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.816 [2024-12-09 05:48:44.856112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.816 [2024-12-09 05:48:44.856124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.816 [2024-12-09 05:48:44.856136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.816 [2024-12-09 05:48:44.868518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.816 [2024-12-09 05:48:44.868990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.816 [2024-12-09 05:48:44.869043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.816 [2024-12-09 05:48:44.869060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.816 [2024-12-09 05:48:44.869313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.816 [2024-12-09 05:48:44.869525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.816 [2024-12-09 05:48:44.869564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.816 [2024-12-09 05:48:44.869578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.816 [2024-12-09 05:48:44.869590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.816 [2024-12-09 05:48:44.881960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.816 [2024-12-09 05:48:44.882417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.816 [2024-12-09 05:48:44.882469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.816 [2024-12-09 05:48:44.882486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.816 [2024-12-09 05:48:44.882730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.816 [2024-12-09 05:48:44.882919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.816 [2024-12-09 05:48:44.882939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.816 [2024-12-09 05:48:44.882952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.816 [2024-12-09 05:48:44.882965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.816 [2024-12-09 05:48:44.895194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.816 [2024-12-09 05:48:44.895621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.816 [2024-12-09 05:48:44.895651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.816 [2024-12-09 05:48:44.895668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.816 [2024-12-09 05:48:44.895910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.816 [2024-12-09 05:48:44.896121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.816 [2024-12-09 05:48:44.896141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.816 [2024-12-09 05:48:44.896153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.816 [2024-12-09 05:48:44.896165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.816 [2024-12-09 05:48:44.908407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.816 [2024-12-09 05:48:44.908806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.817 [2024-12-09 05:48:44.908861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.817 [2024-12-09 05:48:44.908878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.817 [2024-12-09 05:48:44.909129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.817 [2024-12-09 05:48:44.909350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.817 [2024-12-09 05:48:44.909371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.817 [2024-12-09 05:48:44.909384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.817 [2024-12-09 05:48:44.909401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.817 [2024-12-09 05:48:44.921643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.817 [2024-12-09 05:48:44.922054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.817 [2024-12-09 05:48:44.922107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.817 [2024-12-09 05:48:44.922124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.817 [2024-12-09 05:48:44.922392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.817 [2024-12-09 05:48:44.922599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.817 [2024-12-09 05:48:44.922620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.817 [2024-12-09 05:48:44.922634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.817 [2024-12-09 05:48:44.922661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.817 [2024-12-09 05:48:44.934793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.817 [2024-12-09 05:48:44.935142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.817 [2024-12-09 05:48:44.935171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.817 [2024-12-09 05:48:44.935187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.817 [2024-12-09 05:48:44.935435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.817 [2024-12-09 05:48:44.935647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.817 [2024-12-09 05:48:44.935666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.817 [2024-12-09 05:48:44.935679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.817 [2024-12-09 05:48:44.935691] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.817 [2024-12-09 05:48:44.947981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.817 [2024-12-09 05:48:44.948388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.817 [2024-12-09 05:48:44.948417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.817 [2024-12-09 05:48:44.948434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.817 [2024-12-09 05:48:44.948670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.817 [2024-12-09 05:48:44.948875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.817 [2024-12-09 05:48:44.948895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.817 [2024-12-09 05:48:44.948907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.817 [2024-12-09 05:48:44.948919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.817 [2024-12-09 05:48:44.961211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.817 [2024-12-09 05:48:44.961592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.817 [2024-12-09 05:48:44.961636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.817 [2024-12-09 05:48:44.961652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.817 [2024-12-09 05:48:44.961886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.817 [2024-12-09 05:48:44.962090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.817 [2024-12-09 05:48:44.962109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.817 [2024-12-09 05:48:44.962122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.817 [2024-12-09 05:48:44.962134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.817 [2024-12-09 05:48:44.974347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.817 [2024-12-09 05:48:44.974715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.817 [2024-12-09 05:48:44.974782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.817 [2024-12-09 05:48:44.974798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.817 [2024-12-09 05:48:44.975029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.817 [2024-12-09 05:48:44.975268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.817 [2024-12-09 05:48:44.975314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.817 [2024-12-09 05:48:44.975327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.817 [2024-12-09 05:48:44.975341] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.817 [2024-12-09 05:48:44.987910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.817 [2024-12-09 05:48:44.988344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.817 [2024-12-09 05:48:44.988373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.817 [2024-12-09 05:48:44.988390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.817 [2024-12-09 05:48:44.988606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.817 [2024-12-09 05:48:44.988854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.817 [2024-12-09 05:48:44.988875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.817 [2024-12-09 05:48:44.988889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.817 [2024-12-09 05:48:44.988902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.817 [2024-12-09 05:48:45.001305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.817 [2024-12-09 05:48:45.001794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.817 [2024-12-09 05:48:45.001848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.817 [2024-12-09 05:48:45.001869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.817 [2024-12-09 05:48:45.002115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.817 [2024-12-09 05:48:45.002337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.817 [2024-12-09 05:48:45.002360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.817 [2024-12-09 05:48:45.002374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.818 [2024-12-09 05:48:45.002388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:50.818 [2024-12-09 05:48:45.014685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.818 [2024-12-09 05:48:45.015141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.818 [2024-12-09 05:48:45.015194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.818 [2024-12-09 05:48:45.015209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.818 [2024-12-09 05:48:45.015447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.818 [2024-12-09 05:48:45.015668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.818 [2024-12-09 05:48:45.015689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.818 [2024-12-09 05:48:45.015702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.818 [2024-12-09 05:48:45.015714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:50.818 [2024-12-09 05:48:45.028078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:50.818 [2024-12-09 05:48:45.028437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:50.818 [2024-12-09 05:48:45.028468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:50.818 [2024-12-09 05:48:45.028485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:50.818 [2024-12-09 05:48:45.028741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:50.818 [2024-12-09 05:48:45.028929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:50.818 [2024-12-09 05:48:45.028949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:50.818 [2024-12-09 05:48:45.028962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:50.818 [2024-12-09 05:48:45.028974] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.076 [2024-12-09 05:48:45.041411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.076 [2024-12-09 05:48:45.041817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.076 [2024-12-09 05:48:45.041846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.076 [2024-12-09 05:48:45.041863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.076 [2024-12-09 05:48:45.042099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.076 [2024-12-09 05:48:45.042333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.076 [2024-12-09 05:48:45.042359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.076 [2024-12-09 05:48:45.042373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.076 [2024-12-09 05:48:45.042386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.076 [2024-12-09 05:48:45.054476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.076 [2024-12-09 05:48:45.054881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.076 [2024-12-09 05:48:45.054909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.076 [2024-12-09 05:48:45.054925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.076 [2024-12-09 05:48:45.055156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.076 [2024-12-09 05:48:45.055413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.076 [2024-12-09 05:48:45.055435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.076 [2024-12-09 05:48:45.055450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.076 [2024-12-09 05:48:45.055464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.076 [2024-12-09 05:48:45.067488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.076 [2024-12-09 05:48:45.067895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.076 [2024-12-09 05:48:45.067922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.076 [2024-12-09 05:48:45.067939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.076 [2024-12-09 05:48:45.068174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.076 [2024-12-09 05:48:45.068427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.076 [2024-12-09 05:48:45.068449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.076 [2024-12-09 05:48:45.068463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.076 [2024-12-09 05:48:45.068476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.076 [2024-12-09 05:48:45.080681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.076 [2024-12-09 05:48:45.081085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.076 [2024-12-09 05:48:45.081112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.076 [2024-12-09 05:48:45.081127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.076 [2024-12-09 05:48:45.081353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.076 [2024-12-09 05:48:45.081554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.076 [2024-12-09 05:48:45.081588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.076 [2024-12-09 05:48:45.081601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.076 [2024-12-09 05:48:45.081618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.076 [2024-12-09 05:48:45.093936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.076 [2024-12-09 05:48:45.094251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.076 [2024-12-09 05:48:45.094286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.076 [2024-12-09 05:48:45.094320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.076 [2024-12-09 05:48:45.094560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.076 [2024-12-09 05:48:45.094766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.076 [2024-12-09 05:48:45.094785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.076 [2024-12-09 05:48:45.094798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.076 [2024-12-09 05:48:45.094810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.076 [2024-12-09 05:48:45.107087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.076 [2024-12-09 05:48:45.107456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.076 [2024-12-09 05:48:45.107486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.076 [2024-12-09 05:48:45.107502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.077 [2024-12-09 05:48:45.107748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.077 [2024-12-09 05:48:45.107936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.077 [2024-12-09 05:48:45.107956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.077 [2024-12-09 05:48:45.107968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.077 [2024-12-09 05:48:45.107980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.077 [2024-12-09 05:48:45.120295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.077 [2024-12-09 05:48:45.120679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.077 [2024-12-09 05:48:45.120706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.077 [2024-12-09 05:48:45.120721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.077 [2024-12-09 05:48:45.120958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.077 [2024-12-09 05:48:45.121179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.077 [2024-12-09 05:48:45.121199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.077 [2024-12-09 05:48:45.121213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.077 [2024-12-09 05:48:45.121226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.077 [2024-12-09 05:48:45.133456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.077 [2024-12-09 05:48:45.133826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.077 [2024-12-09 05:48:45.133855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.077 [2024-12-09 05:48:45.133872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.077 [2024-12-09 05:48:45.134109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.077 [2024-12-09 05:48:45.134325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.077 [2024-12-09 05:48:45.134346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.077 [2024-12-09 05:48:45.134358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.077 [2024-12-09 05:48:45.134370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.077 [2024-12-09 05:48:45.146781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.077 [2024-12-09 05:48:45.147184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.077 [2024-12-09 05:48:45.147211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.077 [2024-12-09 05:48:45.147227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.077 [2024-12-09 05:48:45.147486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.077 [2024-12-09 05:48:45.147693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.077 [2024-12-09 05:48:45.147713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.077 [2024-12-09 05:48:45.147725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.077 [2024-12-09 05:48:45.147738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.077 [2024-12-09 05:48:45.160054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.077 [2024-12-09 05:48:45.160365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.077 [2024-12-09 05:48:45.160408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.077 [2024-12-09 05:48:45.160425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.077 [2024-12-09 05:48:45.160661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.077 [2024-12-09 05:48:45.160866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.077 [2024-12-09 05:48:45.160885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.077 [2024-12-09 05:48:45.160898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.077 [2024-12-09 05:48:45.160910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.077 [2024-12-09 05:48:45.173252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.077 [2024-12-09 05:48:45.173650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.077 [2024-12-09 05:48:45.173679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.077 [2024-12-09 05:48:45.173703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.077 [2024-12-09 05:48:45.173941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.077 [2024-12-09 05:48:45.174130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.077 [2024-12-09 05:48:45.174149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.077 [2024-12-09 05:48:45.174162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.077 [2024-12-09 05:48:45.174173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.077 [2024-12-09 05:48:45.186438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.077 [2024-12-09 05:48:45.186842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.077 [2024-12-09 05:48:45.186870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.077 [2024-12-09 05:48:45.186886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.077 [2024-12-09 05:48:45.187122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.077 [2024-12-09 05:48:45.187352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.077 [2024-12-09 05:48:45.187384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.077 [2024-12-09 05:48:45.187398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.077 [2024-12-09 05:48:45.187411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.077 [2024-12-09 05:48:45.199602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.077 [2024-12-09 05:48:45.199946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.077 [2024-12-09 05:48:45.199975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.077 [2024-12-09 05:48:45.199992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.077 [2024-12-09 05:48:45.200228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.077 [2024-12-09 05:48:45.200467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.077 [2024-12-09 05:48:45.200489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.077 [2024-12-09 05:48:45.200503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.077 [2024-12-09 05:48:45.200517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.077 [2024-12-09 05:48:45.212837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.077 [2024-12-09 05:48:45.213231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.077 [2024-12-09 05:48:45.213294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.077 [2024-12-09 05:48:45.213312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.077 [2024-12-09 05:48:45.213561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.077 [2024-12-09 05:48:45.213765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.077 [2024-12-09 05:48:45.213789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.077 [2024-12-09 05:48:45.213802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.077 [2024-12-09 05:48:45.213814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.077 [2024-12-09 05:48:45.226058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.077 [2024-12-09 05:48:45.226438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.077 [2024-12-09 05:48:45.226468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.077 [2024-12-09 05:48:45.226485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.077 [2024-12-09 05:48:45.226738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.077 [2024-12-09 05:48:45.226942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.077 [2024-12-09 05:48:45.226963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.077 [2024-12-09 05:48:45.226975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.077 [2024-12-09 05:48:45.226988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.077 [2024-12-09 05:48:45.239416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.077 [2024-12-09 05:48:45.239830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.077 [2024-12-09 05:48:45.239883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.077 [2024-12-09 05:48:45.239900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.077 [2024-12-09 05:48:45.240146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.077 [2024-12-09 05:48:45.240343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.077 [2024-12-09 05:48:45.240364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.077 [2024-12-09 05:48:45.240377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.077 [2024-12-09 05:48:45.240390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.077 [2024-12-09 05:48:45.252527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.077 [2024-12-09 05:48:45.252947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.077 [2024-12-09 05:48:45.252975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.077 [2024-12-09 05:48:45.252990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.077 [2024-12-09 05:48:45.253224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.077 [2024-12-09 05:48:45.253451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.077 [2024-12-09 05:48:45.253473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.077 [2024-12-09 05:48:45.253487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.077 [2024-12-09 05:48:45.253505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.077 [2024-12-09 05:48:45.265633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.077 [2024-12-09 05:48:45.265976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.077 [2024-12-09 05:48:45.266003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.077 [2024-12-09 05:48:45.266019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.077 [2024-12-09 05:48:45.266249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.077 [2024-12-09 05:48:45.266473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.077 [2024-12-09 05:48:45.266495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.077 [2024-12-09 05:48:45.266508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.077 [2024-12-09 05:48:45.266522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.077 [2024-12-09 05:48:45.278779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.077 [2024-12-09 05:48:45.279155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.077 [2024-12-09 05:48:45.279227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.077 [2024-12-09 05:48:45.279243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.077 [2024-12-09 05:48:45.279521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.077 [2024-12-09 05:48:45.279727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.077 [2024-12-09 05:48:45.279746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.077 [2024-12-09 05:48:45.279758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.077 [2024-12-09 05:48:45.279770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.077 [2024-12-09 05:48:45.291922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.077 [2024-12-09 05:48:45.292197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.077 [2024-12-09 05:48:45.292240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.077 [2024-12-09 05:48:45.292257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.078 [2024-12-09 05:48:45.292501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.078 [2024-12-09 05:48:45.292724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.078 [2024-12-09 05:48:45.292744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.078 [2024-12-09 05:48:45.292756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.078 [2024-12-09 05:48:45.292768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.335 [2024-12-09 05:48:45.305417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.335 [2024-12-09 05:48:45.305818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.335 [2024-12-09 05:48:45.305845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.335 [2024-12-09 05:48:45.305862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.335 [2024-12-09 05:48:45.306081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.335 [2024-12-09 05:48:45.306313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.335 [2024-12-09 05:48:45.306336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.335 [2024-12-09 05:48:45.306350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.335 [2024-12-09 05:48:45.306363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.335 [2024-12-09 05:48:45.318474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.335 [2024-12-09 05:48:45.318881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.335 [2024-12-09 05:48:45.318910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.335 [2024-12-09 05:48:45.318927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.335 [2024-12-09 05:48:45.319164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.335 [2024-12-09 05:48:45.319383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.335 [2024-12-09 05:48:45.319405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.335 [2024-12-09 05:48:45.319418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.335 [2024-12-09 05:48:45.319431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.335 [2024-12-09 05:48:45.331703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.335 [2024-12-09 05:48:45.332074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.335 [2024-12-09 05:48:45.332102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.335 [2024-12-09 05:48:45.332118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.335 [2024-12-09 05:48:45.332354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.335 [2024-12-09 05:48:45.332564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.335 [2024-12-09 05:48:45.332584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.335 [2024-12-09 05:48:45.332598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.335 [2024-12-09 05:48:45.332610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.335 [2024-12-09 05:48:45.344812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.335 [2024-12-09 05:48:45.345220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.335 [2024-12-09 05:48:45.345249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.335 [2024-12-09 05:48:45.345266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.335 [2024-12-09 05:48:45.345556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.335 [2024-12-09 05:48:45.345763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.335 [2024-12-09 05:48:45.345783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.335 [2024-12-09 05:48:45.345796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.335 [2024-12-09 05:48:45.345808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.335 [2024-12-09 05:48:45.357909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.335 [2024-12-09 05:48:45.358315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.335 [2024-12-09 05:48:45.358344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.335 [2024-12-09 05:48:45.358360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.335 [2024-12-09 05:48:45.358596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.335 [2024-12-09 05:48:45.358784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.336 [2024-12-09 05:48:45.358815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.336 [2024-12-09 05:48:45.358828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.336 [2024-12-09 05:48:45.358840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.336 [2024-12-09 05:48:45.371068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.336 [2024-12-09 05:48:45.371418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.336 [2024-12-09 05:48:45.371447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.336 [2024-12-09 05:48:45.371463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.336 [2024-12-09 05:48:45.371695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.336 [2024-12-09 05:48:45.371898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.336 [2024-12-09 05:48:45.371918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.336 [2024-12-09 05:48:45.371932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.336 [2024-12-09 05:48:45.371944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.336 [2024-12-09 05:48:45.384167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.336 [2024-12-09 05:48:45.384595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.336 [2024-12-09 05:48:45.384625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.336 [2024-12-09 05:48:45.384642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.336 [2024-12-09 05:48:45.384878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.336 [2024-12-09 05:48:45.385083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.336 [2024-12-09 05:48:45.385108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.336 [2024-12-09 05:48:45.385121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.336 [2024-12-09 05:48:45.385134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.336 [2024-12-09 05:48:45.397354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.336 [2024-12-09 05:48:45.397732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.336 [2024-12-09 05:48:45.397760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.336 [2024-12-09 05:48:45.397777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.336 [2024-12-09 05:48:45.397995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.336 [2024-12-09 05:48:45.398199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.336 [2024-12-09 05:48:45.398219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.336 [2024-12-09 05:48:45.398232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.336 [2024-12-09 05:48:45.398244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.336 [2024-12-09 05:48:45.410475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.336 [2024-12-09 05:48:45.410882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.336 [2024-12-09 05:48:45.410911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.336 [2024-12-09 05:48:45.410927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.336 [2024-12-09 05:48:45.411162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.336 [2024-12-09 05:48:45.411414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.336 [2024-12-09 05:48:45.411436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.336 [2024-12-09 05:48:45.411449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.336 [2024-12-09 05:48:45.411462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.336 [2024-12-09 05:48:45.423738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.336 [2024-12-09 05:48:45.424120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.336 [2024-12-09 05:48:45.424150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.336 [2024-12-09 05:48:45.424167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.336 [2024-12-09 05:48:45.424423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.336 [2024-12-09 05:48:45.424674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.336 [2024-12-09 05:48:45.424696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.336 [2024-12-09 05:48:45.424710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.336 [2024-12-09 05:48:45.424727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.336 [2024-12-09 05:48:45.437103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.336 [2024-12-09 05:48:45.437491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.336 [2024-12-09 05:48:45.437522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.336 [2024-12-09 05:48:45.437538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.336 [2024-12-09 05:48:45.437781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.336 [2024-12-09 05:48:45.437991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.336 [2024-12-09 05:48:45.438010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.336 [2024-12-09 05:48:45.438023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.336 [2024-12-09 05:48:45.438036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.336 [2024-12-09 05:48:45.450527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.336 [2024-12-09 05:48:45.450913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.336 [2024-12-09 05:48:45.450942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.336 [2024-12-09 05:48:45.450959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.336 [2024-12-09 05:48:45.451197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.336 [2024-12-09 05:48:45.451435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.336 [2024-12-09 05:48:45.451457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.336 [2024-12-09 05:48:45.451470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.336 [2024-12-09 05:48:45.451482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.336 [2024-12-09 05:48:45.463778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.336 [2024-12-09 05:48:45.464186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.336 [2024-12-09 05:48:45.464212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.336 [2024-12-09 05:48:45.464227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.336 [2024-12-09 05:48:45.464495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.336 [2024-12-09 05:48:45.464722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.336 [2024-12-09 05:48:45.464742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.336 [2024-12-09 05:48:45.464754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.336 [2024-12-09 05:48:45.464766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.336 [2024-12-09 05:48:45.476945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.336 [2024-12-09 05:48:45.477361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.336 [2024-12-09 05:48:45.477393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.336 [2024-12-09 05:48:45.477410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.336 [2024-12-09 05:48:45.477656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.336 [2024-12-09 05:48:45.477862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.336 [2024-12-09 05:48:45.477882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.336 [2024-12-09 05:48:45.477894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.336 [2024-12-09 05:48:45.477907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.336 [2024-12-09 05:48:45.490111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.336 [2024-12-09 05:48:45.490526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.336 [2024-12-09 05:48:45.490554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.336 [2024-12-09 05:48:45.490571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.336 [2024-12-09 05:48:45.490802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.336 [2024-12-09 05:48:45.491006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.336 [2024-12-09 05:48:45.491026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.336 [2024-12-09 05:48:45.491039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.336 [2024-12-09 05:48:45.491051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.336 [2024-12-09 05:48:45.503390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.336 [2024-12-09 05:48:45.503769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.336 [2024-12-09 05:48:45.503796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.336 [2024-12-09 05:48:45.503812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.336 [2024-12-09 05:48:45.504022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.336 [2024-12-09 05:48:45.504226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.336 [2024-12-09 05:48:45.504246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.336 [2024-12-09 05:48:45.504284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.336 [2024-12-09 05:48:45.504300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.336 [2024-12-09 05:48:45.516482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.336 [2024-12-09 05:48:45.516826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.336 [2024-12-09 05:48:45.516855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.336 [2024-12-09 05:48:45.516871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.336 [2024-12-09 05:48:45.517112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.336 [2024-12-09 05:48:45.517360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.336 [2024-12-09 05:48:45.517382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.336 [2024-12-09 05:48:45.517396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.336 [2024-12-09 05:48:45.517408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.336 [2024-12-09 05:48:45.529524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.336 [2024-12-09 05:48:45.529839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.336 [2024-12-09 05:48:45.529866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.336 [2024-12-09 05:48:45.529882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.336 [2024-12-09 05:48:45.530099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.336 [2024-12-09 05:48:45.530349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.336 [2024-12-09 05:48:45.530371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.336 [2024-12-09 05:48:45.530384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.336 [2024-12-09 05:48:45.530397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.336 [2024-12-09 05:48:45.542598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.336 [2024-12-09 05:48:45.542941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.336 [2024-12-09 05:48:45.542968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.336 [2024-12-09 05:48:45.542984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.336 [2024-12-09 05:48:45.543213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.336 [2024-12-09 05:48:45.543449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.337 [2024-12-09 05:48:45.543471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.337 [2024-12-09 05:48:45.543484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.337 [2024-12-09 05:48:45.543496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.337 [2024-12-09 05:48:45.555684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.337 [2024-12-09 05:48:45.556030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.337 [2024-12-09 05:48:45.556059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.337 [2024-12-09 05:48:45.556075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.337 [2024-12-09 05:48:45.556317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.337 [2024-12-09 05:48:45.556571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.337 [2024-12-09 05:48:45.556599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.337 [2024-12-09 05:48:45.556627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.337 [2024-12-09 05:48:45.556640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.594 [2024-12-09 05:48:45.568721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.594 [2024-12-09 05:48:45.569065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.594 [2024-12-09 05:48:45.569093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.594 [2024-12-09 05:48:45.569110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.594 [2024-12-09 05:48:45.569379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.594 [2024-12-09 05:48:45.569612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.594 [2024-12-09 05:48:45.569633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.594 [2024-12-09 05:48:45.569646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.594 [2024-12-09 05:48:45.569672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.594 [2024-12-09 05:48:45.581940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.594 [2024-12-09 05:48:45.582291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.594 [2024-12-09 05:48:45.582336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.594 [2024-12-09 05:48:45.582353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.594 [2024-12-09 05:48:45.582594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.594 [2024-12-09 05:48:45.582798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.594 [2024-12-09 05:48:45.582818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.594 [2024-12-09 05:48:45.582830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.594 [2024-12-09 05:48:45.582842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.594 [2024-12-09 05:48:45.595149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.594 [2024-12-09 05:48:45.595531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.594 [2024-12-09 05:48:45.595559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.594 [2024-12-09 05:48:45.595575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.594 [2024-12-09 05:48:45.595809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.594 [2024-12-09 05:48:45.596014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.594 [2024-12-09 05:48:45.596033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.594 [2024-12-09 05:48:45.596045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.594 [2024-12-09 05:48:45.596061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.594 [2024-12-09 05:48:45.608383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.594 [2024-12-09 05:48:45.608700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.594 [2024-12-09 05:48:45.608742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.594 [2024-12-09 05:48:45.608758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.594 [2024-12-09 05:48:45.608975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.594 [2024-12-09 05:48:45.609180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.594 [2024-12-09 05:48:45.609199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.594 [2024-12-09 05:48:45.609211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.594 [2024-12-09 05:48:45.609223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.594 [2024-12-09 05:48:45.621608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.594 [2024-12-09 05:48:45.622013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.594 [2024-12-09 05:48:45.622041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.594 [2024-12-09 05:48:45.622058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.594 [2024-12-09 05:48:45.622306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.594 [2024-12-09 05:48:45.622520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.594 [2024-12-09 05:48:45.622541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.594 [2024-12-09 05:48:45.622554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.594 [2024-12-09 05:48:45.622566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.594 [2024-12-09 05:48:45.634698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.594 [2024-12-09 05:48:45.635104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.594 [2024-12-09 05:48:45.635132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.594 [2024-12-09 05:48:45.635149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.594 [2024-12-09 05:48:45.635403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.594 [2024-12-09 05:48:45.635639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.594 [2024-12-09 05:48:45.635674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.594 [2024-12-09 05:48:45.635686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.594 [2024-12-09 05:48:45.635698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.595 [2024-12-09 05:48:45.647739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.595 [2024-12-09 05:48:45.648023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.595 [2024-12-09 05:48:45.648064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.595 [2024-12-09 05:48:45.648080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.595 [2024-12-09 05:48:45.648301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.595 [2024-12-09 05:48:45.648516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.595 [2024-12-09 05:48:45.648537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.595 [2024-12-09 05:48:45.648550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.595 [2024-12-09 05:48:45.648576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.595 [2024-12-09 05:48:45.660940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.595 [2024-12-09 05:48:45.661284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.595 [2024-12-09 05:48:45.661312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.595 [2024-12-09 05:48:45.661328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.595 [2024-12-09 05:48:45.661560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.595 [2024-12-09 05:48:45.661765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.595 [2024-12-09 05:48:45.661784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.595 [2024-12-09 05:48:45.661796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.595 [2024-12-09 05:48:45.661808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.595 [2024-12-09 05:48:45.673971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.595 [2024-12-09 05:48:45.674313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.595 [2024-12-09 05:48:45.674357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.595 [2024-12-09 05:48:45.674373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.595 [2024-12-09 05:48:45.674609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.595 [2024-12-09 05:48:45.674830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.595 [2024-12-09 05:48:45.674850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.595 [2024-12-09 05:48:45.674862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.595 [2024-12-09 05:48:45.674874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.595 [2024-12-09 05:48:45.687172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.595 [2024-12-09 05:48:45.687579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.595 [2024-12-09 05:48:45.687606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.595 [2024-12-09 05:48:45.687636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.595 [2024-12-09 05:48:45.687859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.595 [2024-12-09 05:48:45.688063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.595 [2024-12-09 05:48:45.688083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.595 [2024-12-09 05:48:45.688095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.595 [2024-12-09 05:48:45.688106] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.595 [2024-12-09 05:48:45.700303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.595 [2024-12-09 05:48:45.700721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.595 [2024-12-09 05:48:45.700749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.595 [2024-12-09 05:48:45.700765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.595 [2024-12-09 05:48:45.701000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.595 [2024-12-09 05:48:45.701204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.595 [2024-12-09 05:48:45.701223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.595 [2024-12-09 05:48:45.701236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.595 [2024-12-09 05:48:45.701248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.595 [2024-12-09 05:48:45.713434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.595 [2024-12-09 05:48:45.713765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.595 [2024-12-09 05:48:45.713793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.595 [2024-12-09 05:48:45.713810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.595 [2024-12-09 05:48:45.714027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.595 [2024-12-09 05:48:45.714233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.595 [2024-12-09 05:48:45.714252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.595 [2024-12-09 05:48:45.714265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.595 [2024-12-09 05:48:45.714299] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.595 [2024-12-09 05:48:45.726846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.595 [2024-12-09 05:48:45.727224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.595 [2024-12-09 05:48:45.727251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.595 [2024-12-09 05:48:45.727266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.595 [2024-12-09 05:48:45.727508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.595 [2024-12-09 05:48:45.727738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.595 [2024-12-09 05:48:45.727762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.595 [2024-12-09 05:48:45.727775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.595 [2024-12-09 05:48:45.727786] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.595 [2024-12-09 05:48:45.740189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.595 [2024-12-09 05:48:45.740551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.595 [2024-12-09 05:48:45.740581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.595 [2024-12-09 05:48:45.740598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.595 [2024-12-09 05:48:45.740837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.595 [2024-12-09 05:48:45.741041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.595 [2024-12-09 05:48:45.741060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.595 [2024-12-09 05:48:45.741072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.595 [2024-12-09 05:48:45.741084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.595 [2024-12-09 05:48:45.753410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.595 [2024-12-09 05:48:45.753739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.595 [2024-12-09 05:48:45.753765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.595 [2024-12-09 05:48:45.753781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.595 [2024-12-09 05:48:45.753999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.595 [2024-12-09 05:48:45.754205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.595 [2024-12-09 05:48:45.754224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.595 [2024-12-09 05:48:45.754237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.595 [2024-12-09 05:48:45.754249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.595 [2024-12-09 05:48:45.766406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.595 [2024-12-09 05:48:45.766810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.595 [2024-12-09 05:48:45.766838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.595 [2024-12-09 05:48:45.766855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.595 [2024-12-09 05:48:45.767092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.595 [2024-12-09 05:48:45.767337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.595 [2024-12-09 05:48:45.767358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.595 [2024-12-09 05:48:45.767371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.595 [2024-12-09 05:48:45.767388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.596 [2024-12-09 05:48:45.779418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.596 [2024-12-09 05:48:45.779822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.596 [2024-12-09 05:48:45.779849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.596 [2024-12-09 05:48:45.779865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.596 [2024-12-09 05:48:45.780096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.596 [2024-12-09 05:48:45.780342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.596 [2024-12-09 05:48:45.780363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.596 [2024-12-09 05:48:45.780376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.596 [2024-12-09 05:48:45.780388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.596 [2024-12-09 05:48:45.792552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.596 [2024-12-09 05:48:45.792897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.596 [2024-12-09 05:48:45.792924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.596 [2024-12-09 05:48:45.792940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.596 [2024-12-09 05:48:45.793170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.596 [2024-12-09 05:48:45.793419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.596 [2024-12-09 05:48:45.793440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.596 [2024-12-09 05:48:45.793454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.596 [2024-12-09 05:48:45.793467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.596 [2024-12-09 05:48:45.805611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.596 [2024-12-09 05:48:45.806025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.596 [2024-12-09 05:48:45.806053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.596 [2024-12-09 05:48:45.806068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.596 [2024-12-09 05:48:45.806311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.596 [2024-12-09 05:48:45.806505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.596 [2024-12-09 05:48:45.806525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.596 [2024-12-09 05:48:45.806538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.596 [2024-12-09 05:48:45.806550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.854 4465.40 IOPS, 17.44 MiB/s [2024-12-09T04:48:46.079Z] [2024-12-09 05:48:45.819211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.854 [2024-12-09 05:48:45.819595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.854 [2024-12-09 05:48:45.819638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.854 [2024-12-09 05:48:45.819655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.854 [2024-12-09 05:48:45.819890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.854 [2024-12-09 05:48:45.820078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.854 [2024-12-09 05:48:45.820097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.854 [2024-12-09 05:48:45.820110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.854 [2024-12-09 05:48:45.820122] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.854 [2024-12-09 05:48:45.832301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.854 [2024-12-09 05:48:45.832643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.854 [2024-12-09 05:48:45.832671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.854 [2024-12-09 05:48:45.832687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.854 [2024-12-09 05:48:45.832917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.854 [2024-12-09 05:48:45.833122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.854 [2024-12-09 05:48:45.833141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.854 [2024-12-09 05:48:45.833153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.854 [2024-12-09 05:48:45.833165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.854 [2024-12-09 05:48:45.845314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.854 [2024-12-09 05:48:45.845684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.854 [2024-12-09 05:48:45.845712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.854 [2024-12-09 05:48:45.845728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.854 [2024-12-09 05:48:45.845946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.854 [2024-12-09 05:48:45.846150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.854 [2024-12-09 05:48:45.846169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.854 [2024-12-09 05:48:45.846181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.854 [2024-12-09 05:48:45.846193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.854 [2024-12-09 05:48:45.858371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.854 [2024-12-09 05:48:45.858714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.854 [2024-12-09 05:48:45.858742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.854 [2024-12-09 05:48:45.858763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.854 [2024-12-09 05:48:45.858998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.854 [2024-12-09 05:48:45.859204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.854 [2024-12-09 05:48:45.859223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.854 [2024-12-09 05:48:45.859235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.854 [2024-12-09 05:48:45.859247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.854 [2024-12-09 05:48:45.871488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.854 [2024-12-09 05:48:45.871831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.854 [2024-12-09 05:48:45.871859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.854 [2024-12-09 05:48:45.871875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.854 [2024-12-09 05:48:45.872112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.854 [2024-12-09 05:48:45.872341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.855 [2024-12-09 05:48:45.872362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.855 [2024-12-09 05:48:45.872374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.855 [2024-12-09 05:48:45.872386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.855 [2024-12-09 05:48:45.884558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.855 [2024-12-09 05:48:45.884918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.855 [2024-12-09 05:48:45.884945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.855 [2024-12-09 05:48:45.884961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.855 [2024-12-09 05:48:45.885197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.855 [2024-12-09 05:48:45.885433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.855 [2024-12-09 05:48:45.885454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.855 [2024-12-09 05:48:45.885467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.855 [2024-12-09 05:48:45.885480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.855 [2024-12-09 05:48:45.897619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.855 [2024-12-09 05:48:45.897939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.855 [2024-12-09 05:48:45.897966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.855 [2024-12-09 05:48:45.897983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.855 [2024-12-09 05:48:45.898185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.855 [2024-12-09 05:48:45.898437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.855 [2024-12-09 05:48:45.898459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.855 [2024-12-09 05:48:45.898471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.855 [2024-12-09 05:48:45.898484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.855 [2024-12-09 05:48:45.910621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.855 [2024-12-09 05:48:45.910928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.855 [2024-12-09 05:48:45.910955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.855 [2024-12-09 05:48:45.910971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.855 [2024-12-09 05:48:45.911167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.855 [2024-12-09 05:48:45.911416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.855 [2024-12-09 05:48:45.911437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.855 [2024-12-09 05:48:45.911450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.855 [2024-12-09 05:48:45.911462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.855 [2024-12-09 05:48:45.923642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.855 [2024-12-09 05:48:45.923954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.855 [2024-12-09 05:48:45.923982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.855 [2024-12-09 05:48:45.923998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.855 [2024-12-09 05:48:45.924215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.855 [2024-12-09 05:48:45.924453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.855 [2024-12-09 05:48:45.924474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.855 [2024-12-09 05:48:45.924487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.855 [2024-12-09 05:48:45.924499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.855 [2024-12-09 05:48:45.936833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.855 [2024-12-09 05:48:45.937187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.855 [2024-12-09 05:48:45.937216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.855 [2024-12-09 05:48:45.937233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.855 [2024-12-09 05:48:45.937474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.855 [2024-12-09 05:48:45.937704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.855 [2024-12-09 05:48:45.937725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.855 [2024-12-09 05:48:45.937738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.855 [2024-12-09 05:48:45.937754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.855 [2024-12-09 05:48:45.950053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.855 [2024-12-09 05:48:45.950393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.855 [2024-12-09 05:48:45.950420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.855 [2024-12-09 05:48:45.950436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.855 [2024-12-09 05:48:45.950652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.855 [2024-12-09 05:48:45.950873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.855 [2024-12-09 05:48:45.950892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.855 [2024-12-09 05:48:45.950905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.855 [2024-12-09 05:48:45.950917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.855 [2024-12-09 05:48:45.963125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.855 [2024-12-09 05:48:45.963499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.855 [2024-12-09 05:48:45.963526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.855 [2024-12-09 05:48:45.963542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.855 [2024-12-09 05:48:45.963760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.855 [2024-12-09 05:48:45.963966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.855 [2024-12-09 05:48:45.963985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.855 [2024-12-09 05:48:45.963997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.855 [2024-12-09 05:48:45.964009] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.855 [2024-12-09 05:48:45.976188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.855 [2024-12-09 05:48:45.976686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.855 [2024-12-09 05:48:45.976714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.855 [2024-12-09 05:48:45.976730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.855 [2024-12-09 05:48:45.976977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.855 [2024-12-09 05:48:45.977181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.855 [2024-12-09 05:48:45.977200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.855 [2024-12-09 05:48:45.977213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.855 [2024-12-09 05:48:45.977224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.855 [2024-12-09 05:48:45.989288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.855 [2024-12-09 05:48:45.989620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.855 [2024-12-09 05:48:45.989687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.855 [2024-12-09 05:48:45.989703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.855 [2024-12-09 05:48:45.989916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.855 [2024-12-09 05:48:45.990119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.855 [2024-12-09 05:48:45.990138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.855 [2024-12-09 05:48:45.990151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.855 [2024-12-09 05:48:45.990162] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.855 [2024-12-09 05:48:46.002732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.855 [2024-12-09 05:48:46.003100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.855 [2024-12-09 05:48:46.003153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.855 [2024-12-09 05:48:46.003187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.855 [2024-12-09 05:48:46.003432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.855 [2024-12-09 05:48:46.003660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.855 [2024-12-09 05:48:46.003680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.855 [2024-12-09 05:48:46.003693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.855 [2024-12-09 05:48:46.003705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.855 [2024-12-09 05:48:46.016173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.855 [2024-12-09 05:48:46.016520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.855 [2024-12-09 05:48:46.016572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.855 [2024-12-09 05:48:46.016590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.855 [2024-12-09 05:48:46.016854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.855 [2024-12-09 05:48:46.017088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.855 [2024-12-09 05:48:46.017109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.855 [2024-12-09 05:48:46.017123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.855 [2024-12-09 05:48:46.017136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.855 [2024-12-09 05:48:46.029969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.855 [2024-12-09 05:48:46.030306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.856 [2024-12-09 05:48:46.030337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.856 [2024-12-09 05:48:46.030359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.856 [2024-12-09 05:48:46.030577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.856 [2024-12-09 05:48:46.030812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.856 [2024-12-09 05:48:46.030833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.856 [2024-12-09 05:48:46.030846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.856 [2024-12-09 05:48:46.030858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.856 [2024-12-09 05:48:46.043751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.856 [2024-12-09 05:48:46.044083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.856 [2024-12-09 05:48:46.044121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.856 [2024-12-09 05:48:46.044154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.856 [2024-12-09 05:48:46.044396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.856 [2024-12-09 05:48:46.044631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.856 [2024-12-09 05:48:46.044668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.856 [2024-12-09 05:48:46.044681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.856 [2024-12-09 05:48:46.044694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:51.856 [2024-12-09 05:48:46.057389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.856 [2024-12-09 05:48:46.057818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.856 [2024-12-09 05:48:46.057848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.856 [2024-12-09 05:48:46.057864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.856 [2024-12-09 05:48:46.058091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.856 [2024-12-09 05:48:46.058346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.856 [2024-12-09 05:48:46.058370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.856 [2024-12-09 05:48:46.058385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.856 [2024-12-09 05:48:46.058399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:51.856 [2024-12-09 05:48:46.070979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:51.856 [2024-12-09 05:48:46.071342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:51.856 [2024-12-09 05:48:46.071372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:51.856 [2024-12-09 05:48:46.071389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:51.856 [2024-12-09 05:48:46.071621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:51.856 [2024-12-09 05:48:46.071857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:51.856 [2024-12-09 05:48:46.071878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:51.856 [2024-12-09 05:48:46.071892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:51.856 [2024-12-09 05:48:46.071904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.114 [2024-12-09 05:48:46.084629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.114 [2024-12-09 05:48:46.085087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.114 [2024-12-09 05:48:46.085139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.114 [2024-12-09 05:48:46.085156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.114 [2024-12-09 05:48:46.085415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.114 [2024-12-09 05:48:46.085650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.114 [2024-12-09 05:48:46.085670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.114 [2024-12-09 05:48:46.085683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.114 [2024-12-09 05:48:46.085695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:52.114 [2024-12-09 05:48:46.098008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.114 [2024-12-09 05:48:46.098370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.114 [2024-12-09 05:48:46.098400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.114 [2024-12-09 05:48:46.098417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.114 [2024-12-09 05:48:46.098659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.114 [2024-12-09 05:48:46.098864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.114 [2024-12-09 05:48:46.098883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.114 [2024-12-09 05:48:46.098895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.114 [2024-12-09 05:48:46.098907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.114 [2024-12-09 05:48:46.111310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.114 [2024-12-09 05:48:46.111686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.114 [2024-12-09 05:48:46.111725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.114 [2024-12-09 05:48:46.111759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.114 [2024-12-09 05:48:46.111989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.114 [2024-12-09 05:48:46.112178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.114 [2024-12-09 05:48:46.112198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.114 [2024-12-09 05:48:46.112210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.114 [2024-12-09 05:48:46.112226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:52.114 [2024-12-09 05:48:46.124696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.114 [2024-12-09 05:48:46.125052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.114 [2024-12-09 05:48:46.125082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.114 [2024-12-09 05:48:46.125099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.114 [2024-12-09 05:48:46.125361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.114 [2024-12-09 05:48:46.125594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.114 [2024-12-09 05:48:46.125614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.114 [2024-12-09 05:48:46.125626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.114 [2024-12-09 05:48:46.125638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.114 [2024-12-09 05:48:46.137869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.114 [2024-12-09 05:48:46.138283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.114 [2024-12-09 05:48:46.138327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.114 [2024-12-09 05:48:46.138343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.114 [2024-12-09 05:48:46.138580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.114 [2024-12-09 05:48:46.138786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.114 [2024-12-09 05:48:46.138805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.114 [2024-12-09 05:48:46.138818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.114 [2024-12-09 05:48:46.138830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:52.114 [2024-12-09 05:48:46.151034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.114 [2024-12-09 05:48:46.151404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.114 [2024-12-09 05:48:46.151434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.114 [2024-12-09 05:48:46.151451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.114 [2024-12-09 05:48:46.151675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.114 [2024-12-09 05:48:46.151879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.114 [2024-12-09 05:48:46.151899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.114 [2024-12-09 05:48:46.151911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.114 [2024-12-09 05:48:46.151923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.114 [2024-12-09 05:48:46.164290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.114 [2024-12-09 05:48:46.164667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.114 [2024-12-09 05:48:46.164694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.114 [2024-12-09 05:48:46.164709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.114 [2024-12-09 05:48:46.164905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.114 [2024-12-09 05:48:46.165125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.114 [2024-12-09 05:48:46.165144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.114 [2024-12-09 05:48:46.165157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.114 [2024-12-09 05:48:46.165169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:52.114 [2024-12-09 05:48:46.177381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.114 [2024-12-09 05:48:46.177725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.114 [2024-12-09 05:48:46.177753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.114 [2024-12-09 05:48:46.177769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.114 [2024-12-09 05:48:46.177998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.114 [2024-12-09 05:48:46.178203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.114 [2024-12-09 05:48:46.178222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.114 [2024-12-09 05:48:46.178235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.114 [2024-12-09 05:48:46.178246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.114 [2024-12-09 05:48:46.190507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.114 [2024-12-09 05:48:46.190826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.114 [2024-12-09 05:48:46.190853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.114 [2024-12-09 05:48:46.190869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.114 [2024-12-09 05:48:46.191064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.114 [2024-12-09 05:48:46.191326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.114 [2024-12-09 05:48:46.191347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.114 [2024-12-09 05:48:46.191360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.114 [2024-12-09 05:48:46.191373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:52.114 [2024-12-09 05:48:46.203510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.114 [2024-12-09 05:48:46.203916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.114 [2024-12-09 05:48:46.203943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.114 [2024-12-09 05:48:46.203963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.114 [2024-12-09 05:48:46.204196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.114 [2024-12-09 05:48:46.204432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.114 [2024-12-09 05:48:46.204454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.114 [2024-12-09 05:48:46.204467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.114 [2024-12-09 05:48:46.204479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.114 [2024-12-09 05:48:46.216748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.114 [2024-12-09 05:48:46.217085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.114 [2024-12-09 05:48:46.217114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.114 [2024-12-09 05:48:46.217132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.114 [2024-12-09 05:48:46.217385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.115 [2024-12-09 05:48:46.217637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.115 [2024-12-09 05:48:46.217657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.115 [2024-12-09 05:48:46.217670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.115 [2024-12-09 05:48:46.217682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:52.115 [2024-12-09 05:48:46.230156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.115 [2024-12-09 05:48:46.230544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.115 [2024-12-09 05:48:46.230574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.115 [2024-12-09 05:48:46.230591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.115 [2024-12-09 05:48:46.230831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.115 [2024-12-09 05:48:46.231062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.115 [2024-12-09 05:48:46.231084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.115 [2024-12-09 05:48:46.231097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.115 [2024-12-09 05:48:46.231110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.115 [2024-12-09 05:48:46.243481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.115 [2024-12-09 05:48:46.243914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.115 [2024-12-09 05:48:46.243943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.115 [2024-12-09 05:48:46.243959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.115 [2024-12-09 05:48:46.244195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.115 [2024-12-09 05:48:46.244440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.115 [2024-12-09 05:48:46.244461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.115 [2024-12-09 05:48:46.244475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.115 [2024-12-09 05:48:46.244487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:52.115 [2024-12-09 05:48:46.256691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.115 [2024-12-09 05:48:46.257071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.115 [2024-12-09 05:48:46.257099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.115 [2024-12-09 05:48:46.257116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.115 [2024-12-09 05:48:46.257367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.115 [2024-12-09 05:48:46.257583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.115 [2024-12-09 05:48:46.257603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.115 [2024-12-09 05:48:46.257616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.115 [2024-12-09 05:48:46.257628] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.115 [2024-12-09 05:48:46.269908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.115 [2024-12-09 05:48:46.270351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.115 [2024-12-09 05:48:46.270381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.115 [2024-12-09 05:48:46.270398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.115 [2024-12-09 05:48:46.270630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.115 [2024-12-09 05:48:46.270834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.115 [2024-12-09 05:48:46.270853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.115 [2024-12-09 05:48:46.270866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.115 [2024-12-09 05:48:46.270878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:52.115 [2024-12-09 05:48:46.283479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.115 [2024-12-09 05:48:46.283866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.115 [2024-12-09 05:48:46.283914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.115 [2024-12-09 05:48:46.283931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.115 [2024-12-09 05:48:46.284167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.115 [2024-12-09 05:48:46.284419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.115 [2024-12-09 05:48:46.284442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.115 [2024-12-09 05:48:46.284457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.115 [2024-12-09 05:48:46.284476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.115 [2024-12-09 05:48:46.296863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.115 [2024-12-09 05:48:46.297213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.115 [2024-12-09 05:48:46.297241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.115 [2024-12-09 05:48:46.297281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.115 [2024-12-09 05:48:46.297508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.115 [2024-12-09 05:48:46.297742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.115 [2024-12-09 05:48:46.297762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.115 [2024-12-09 05:48:46.297775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.115 [2024-12-09 05:48:46.297787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:52.115 [2024-12-09 05:48:46.310166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.115 [2024-12-09 05:48:46.310563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.115 [2024-12-09 05:48:46.310593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.115 [2024-12-09 05:48:46.310610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.115 [2024-12-09 05:48:46.310852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.115 [2024-12-09 05:48:46.311047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.115 [2024-12-09 05:48:46.311067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.115 [2024-12-09 05:48:46.311080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.115 [2024-12-09 05:48:46.311092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.115 [2024-12-09 05:48:46.323414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.115 [2024-12-09 05:48:46.323758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.115 [2024-12-09 05:48:46.323785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.115 [2024-12-09 05:48:46.323802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.115 [2024-12-09 05:48:46.324032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.115 [2024-12-09 05:48:46.324236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.115 [2024-12-09 05:48:46.324256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.115 [2024-12-09 05:48:46.324268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.115 [2024-12-09 05:48:46.324311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:52.115 [2024-12-09 05:48:46.337054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.115 [2024-12-09 05:48:46.337408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.115 [2024-12-09 05:48:46.337439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.115 [2024-12-09 05:48:46.337457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.374 [2024-12-09 05:48:46.337689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.374 [2024-12-09 05:48:46.337937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.374 [2024-12-09 05:48:46.337958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.374 [2024-12-09 05:48:46.337971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.374 [2024-12-09 05:48:46.337983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.375 [2024-12-09 05:48:46.350371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.375 [2024-12-09 05:48:46.350856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.375 [2024-12-09 05:48:46.350907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.375 [2024-12-09 05:48:46.350924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.375 [2024-12-09 05:48:46.351171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.375 [2024-12-09 05:48:46.351413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.375 [2024-12-09 05:48:46.351435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.375 [2024-12-09 05:48:46.351450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.375 [2024-12-09 05:48:46.351463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:52.375 [2024-12-09 05:48:46.363680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.375 [2024-12-09 05:48:46.364027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.375 [2024-12-09 05:48:46.364056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.375 [2024-12-09 05:48:46.364072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.375 [2024-12-09 05:48:46.364322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.375 [2024-12-09 05:48:46.364545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.375 [2024-12-09 05:48:46.364566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.375 [2024-12-09 05:48:46.364579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.375 [2024-12-09 05:48:46.364592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.375 [2024-12-09 05:48:46.376840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.375 [2024-12-09 05:48:46.377305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.375 [2024-12-09 05:48:46.377350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.375 [2024-12-09 05:48:46.377371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.375 [2024-12-09 05:48:46.377611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.375 [2024-12-09 05:48:46.377800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.375 [2024-12-09 05:48:46.377819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.375 [2024-12-09 05:48:46.377832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.375 [2024-12-09 05:48:46.377843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:52.375 [2024-12-09 05:48:46.389931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.375 [2024-12-09 05:48:46.390319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.375 [2024-12-09 05:48:46.390384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.375 [2024-12-09 05:48:46.390400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.375 [2024-12-09 05:48:46.390624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.375 [2024-12-09 05:48:46.390812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.375 [2024-12-09 05:48:46.390831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.375 [2024-12-09 05:48:46.390844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.375 [2024-12-09 05:48:46.390856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.375 [2024-12-09 05:48:46.402937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.375 [2024-12-09 05:48:46.403282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.375 [2024-12-09 05:48:46.403325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.375 [2024-12-09 05:48:46.403343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.375 [2024-12-09 05:48:46.403578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.375 [2024-12-09 05:48:46.403782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.375 [2024-12-09 05:48:46.403801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.375 [2024-12-09 05:48:46.403813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.375 [2024-12-09 05:48:46.403825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:52.375 [2024-12-09 05:48:46.416031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.375 [2024-12-09 05:48:46.416402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.375 [2024-12-09 05:48:46.416431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.375 [2024-12-09 05:48:46.416447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.375 [2024-12-09 05:48:46.416673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.375 [2024-12-09 05:48:46.416877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.375 [2024-12-09 05:48:46.416900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.375 [2024-12-09 05:48:46.416914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.375 [2024-12-09 05:48:46.416926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.375 [2024-12-09 05:48:46.429082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.375 [2024-12-09 05:48:46.429498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.375 [2024-12-09 05:48:46.429527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.375 [2024-12-09 05:48:46.429544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.375 [2024-12-09 05:48:46.429780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.375 [2024-12-09 05:48:46.429984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.375 [2024-12-09 05:48:46.430004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.375 [2024-12-09 05:48:46.430016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.375 [2024-12-09 05:48:46.430028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:52.375 [2024-12-09 05:48:46.442267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.375 [2024-12-09 05:48:46.442628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.375 [2024-12-09 05:48:46.442656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.375 [2024-12-09 05:48:46.442673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.375 [2024-12-09 05:48:46.442926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.375 [2024-12-09 05:48:46.443121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.375 [2024-12-09 05:48:46.443140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.375 [2024-12-09 05:48:46.443154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.375 [2024-12-09 05:48:46.443166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 759258 Killed "${NVMF_APP[@]}" "$@" 00:53:52.375 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:53:52.375 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:53:52.375 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:53:52.375 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:53:52.375 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:53:52.375 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=760221 00:53:52.375 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:53:52.375 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 760221 00:53:52.375 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 760221 ']' 00:53:52.375 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:52.375 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:52.375 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:52.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
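The trace above is the point where bdevperf.sh kills the old target process (pid 759258) and tgt_init/nvmfappstart starts a fresh nvmf_tgt inside the test network namespace; the surrounding connect() failures with errno = 111 (ECONNREFUSED) are simply the host-side reconnect loop hitting 10.0.0.2:4420 while nothing is listening yet. A minimal sketch of that restart, with the binary path, namespace name and flags copied from the trace; the polling loop is only a simplified stand-in for the harness's waitforlisten helper, not the real function:

  # Relaunch the NVMe-oF target in the test netns: app index 0, all tracepoint groups, core mask 0xE.
  sudo ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Simplified stand-in for waitforlisten: poll until the RPC socket appears.
  for _ in $(seq 1 100); do
      [ -S /var/tmp/spdk.sock ] && break
      sleep 0.1
  done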
00:53:52.375 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:52.375 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:53:52.375 [2024-12-09 05:48:46.455753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.375 [2024-12-09 05:48:46.456173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.375 [2024-12-09 05:48:46.456202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.375 [2024-12-09 05:48:46.456219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.375 [2024-12-09 05:48:46.456480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.375 [2024-12-09 05:48:46.456714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.375 [2024-12-09 05:48:46.456734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.375 [2024-12-09 05:48:46.456747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.375 [2024-12-09 05:48:46.456761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.376 [2024-12-09 05:48:46.469220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.376 [2024-12-09 05:48:46.469579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.376 [2024-12-09 05:48:46.469609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.376 [2024-12-09 05:48:46.469627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.376 [2024-12-09 05:48:46.469858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.376 [2024-12-09 05:48:46.470074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.376 [2024-12-09 05:48:46.470094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.376 [2024-12-09 05:48:46.470107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.376 [2024-12-09 05:48:46.470119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:52.376 [2024-12-09 05:48:46.482490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.376 [2024-12-09 05:48:46.482902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.376 [2024-12-09 05:48:46.482942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.376 [2024-12-09 05:48:46.482958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.376 [2024-12-09 05:48:46.483181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.376 [2024-12-09 05:48:46.483437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.376 [2024-12-09 05:48:46.483459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.376 [2024-12-09 05:48:46.483482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.376 [2024-12-09 05:48:46.483496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.376 [2024-12-09 05:48:46.495708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.376 [2024-12-09 05:48:46.496014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.376 [2024-12-09 05:48:46.496055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.376 [2024-12-09 05:48:46.496072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.376 [2024-12-09 05:48:46.496304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.376 [2024-12-09 05:48:46.496522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.376 [2024-12-09 05:48:46.496542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.376 [2024-12-09 05:48:46.496558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.376 [2024-12-09 05:48:46.496571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.376 [2024-12-09 05:48:46.499202] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:53:52.376 [2024-12-09 05:48:46.499299] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:53:52.376 [2024-12-09 05:48:46.508971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.376 [2024-12-09 05:48:46.509344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.376 [2024-12-09 05:48:46.509382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.376 [2024-12-09 05:48:46.509399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.376 [2024-12-09 05:48:46.509622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.376 [2024-12-09 05:48:46.509816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.376 [2024-12-09 05:48:46.509835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.376 [2024-12-09 05:48:46.509849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.376 [2024-12-09 05:48:46.509861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.376 [2024-12-09 05:48:46.522327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.376 [2024-12-09 05:48:46.522765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.376 [2024-12-09 05:48:46.522794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.376 [2024-12-09 05:48:46.522818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.376 [2024-12-09 05:48:46.523058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.376 [2024-12-09 05:48:46.523269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.376 [2024-12-09 05:48:46.523318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.376 [2024-12-09 05:48:46.523333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.376 [2024-12-09 05:48:46.523346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:52.376 [2024-12-09 05:48:46.535657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.376 [2024-12-09 05:48:46.536055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.376 [2024-12-09 05:48:46.536084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.376 [2024-12-09 05:48:46.536102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.376 [2024-12-09 05:48:46.536339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.376 [2024-12-09 05:48:46.536546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.376 [2024-12-09 05:48:46.536577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.376 [2024-12-09 05:48:46.536608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.376 [2024-12-09 05:48:46.536622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.376 [2024-12-09 05:48:46.549028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.376 [2024-12-09 05:48:46.549419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.376 [2024-12-09 05:48:46.549450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.376 [2024-12-09 05:48:46.549467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.376 [2024-12-09 05:48:46.549719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.376 [2024-12-09 05:48:46.549921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.376 [2024-12-09 05:48:46.549943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.376 [2024-12-09 05:48:46.549957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.376 [2024-12-09 05:48:46.549970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:52.376 [2024-12-09 05:48:46.562458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.376 [2024-12-09 05:48:46.562904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.376 [2024-12-09 05:48:46.562934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.376 [2024-12-09 05:48:46.562951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.376 [2024-12-09 05:48:46.563197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.376 [2024-12-09 05:48:46.563449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.376 [2024-12-09 05:48:46.563472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.376 [2024-12-09 05:48:46.563487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.376 [2024-12-09 05:48:46.563500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.376 [2024-12-09 05:48:46.575811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:53:52.376 [2024-12-09 05:48:46.575817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.376 [2024-12-09 05:48:46.576186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.376 [2024-12-09 05:48:46.576216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.376 [2024-12-09 05:48:46.576233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.376 [2024-12-09 05:48:46.576480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.376 [2024-12-09 05:48:46.576719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.376 [2024-12-09 05:48:46.576740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.376 [2024-12-09 05:48:46.576754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.376 [2024-12-09 05:48:46.576766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:52.376 [2024-12-09 05:48:46.589213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.376 [2024-12-09 05:48:46.589867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.376 [2024-12-09 05:48:46.589911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.376 [2024-12-09 05:48:46.589932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.376 [2024-12-09 05:48:46.590189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.376 [2024-12-09 05:48:46.590423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.377 [2024-12-09 05:48:46.590447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.377 [2024-12-09 05:48:46.590463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.377 [2024-12-09 05:48:46.590480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.656 [2024-12-09 05:48:46.602831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.656 [2024-12-09 05:48:46.603290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.656 [2024-12-09 05:48:46.603324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.656 [2024-12-09 05:48:46.603342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.656 [2024-12-09 05:48:46.603564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.656 [2024-12-09 05:48:46.603800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.656 [2024-12-09 05:48:46.603839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.656 [2024-12-09 05:48:46.603855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.656 [2024-12-09 05:48:46.603886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:52.656 [2024-12-09 05:48:46.616227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.656 [2024-12-09 05:48:46.616639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.656 [2024-12-09 05:48:46.616671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.656 [2024-12-09 05:48:46.616689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.656 [2024-12-09 05:48:46.616934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.656 [2024-12-09 05:48:46.617155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.656 [2024-12-09 05:48:46.617178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.656 [2024-12-09 05:48:46.617192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.656 [2024-12-09 05:48:46.617206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.656 [2024-12-09 05:48:46.629633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.656 [2024-12-09 05:48:46.630060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.656 [2024-12-09 05:48:46.630091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.656 [2024-12-09 05:48:46.630108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.656 [2024-12-09 05:48:46.630388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.656 [2024-12-09 05:48:46.630617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.656 [2024-12-09 05:48:46.630640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.656 [2024-12-09 05:48:46.630656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.656 [2024-12-09 05:48:46.630670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.656 [2024-12-09 05:48:46.637645] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:53:52.656 [2024-12-09 05:48:46.637682] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:53:52.656 [2024-12-09 05:48:46.637696] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:53:52.656 [2024-12-09 05:48:46.637707] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:53:52.656 [2024-12-09 05:48:46.637717] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
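The app_setup_trace notices above list the two ways to inspect the tracepoints enabled by -e 0xFFFF. A short sketch using exactly the command and file named in those notices; only the copy destination is an illustrative choice:

  # Live snapshot of the nvmf tracepoints of app instance 0, as the notice suggests.
  spdk_trace -s nvmf -i 0
  # Or keep the shared-memory trace file for offline analysis after the run.
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0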
00:53:52.656 [2024-12-09 05:48:46.639270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:52.656 [2024-12-09 05:48:46.639204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:53:52.656 [2024-12-09 05:48:46.639266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:53:52.656 [2024-12-09 05:48:46.643111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.656 [2024-12-09 05:48:46.643533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.656 [2024-12-09 05:48:46.643569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.656 [2024-12-09 05:48:46.643588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.656 [2024-12-09 05:48:46.643840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.656 [2024-12-09 05:48:46.644050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.656 [2024-12-09 05:48:46.644081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.656 [2024-12-09 05:48:46.644099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.656 [2024-12-09 05:48:46.644115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.656 [2024-12-09 05:48:46.656806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.656 [2024-12-09 05:48:46.657378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.656 [2024-12-09 05:48:46.657424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.656 [2024-12-09 05:48:46.657445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.656 [2024-12-09 05:48:46.657686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.656 [2024-12-09 05:48:46.657903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.656 [2024-12-09 05:48:46.657925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.656 [2024-12-09 05:48:46.657942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.656 [2024-12-09 05:48:46.657960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
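The target was started with -m 0xE, and the reactor notices above show it coming up on cores 1, 2 and 3 (matching the earlier 'Total cores available: 3'). A tiny illustrative snippet of how that hex mask maps to core indices:

  # 0xE = binary 1110: bit i set means a reactor runs on core i, so cores 1-3; core 0 is left free.
  mask=0xE
  for core in $(seq 0 7); do
      (( (mask >> core) & 1 )) && echo "reactor on core $core"
  done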
00:53:52.656 [2024-12-09 05:48:46.670487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.656 [2024-12-09 05:48:46.671053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.656 [2024-12-09 05:48:46.671099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.656 [2024-12-09 05:48:46.671120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.656 [2024-12-09 05:48:46.671356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.656 [2024-12-09 05:48:46.671595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.656 [2024-12-09 05:48:46.671618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.656 [2024-12-09 05:48:46.671635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.656 [2024-12-09 05:48:46.671651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.656 [2024-12-09 05:48:46.684177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.656 [2024-12-09 05:48:46.684807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.656 [2024-12-09 05:48:46.684854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.656 [2024-12-09 05:48:46.684875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.656 [2024-12-09 05:48:46.685115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.656 [2024-12-09 05:48:46.685368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.656 [2024-12-09 05:48:46.685394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.656 [2024-12-09 05:48:46.685411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.656 [2024-12-09 05:48:46.685440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:52.656 [2024-12-09 05:48:46.697807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.656 [2024-12-09 05:48:46.698293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.656 [2024-12-09 05:48:46.698333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.656 [2024-12-09 05:48:46.698353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.656 [2024-12-09 05:48:46.698577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.656 [2024-12-09 05:48:46.698808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.656 [2024-12-09 05:48:46.698832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.656 [2024-12-09 05:48:46.698848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.656 [2024-12-09 05:48:46.698864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.656 [2024-12-09 05:48:46.711438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.656 [2024-12-09 05:48:46.711986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.656 [2024-12-09 05:48:46.712030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.656 [2024-12-09 05:48:46.712051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.657 [2024-12-09 05:48:46.712300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.657 [2024-12-09 05:48:46.712540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.657 [2024-12-09 05:48:46.712563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.657 [2024-12-09 05:48:46.712581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.657 [2024-12-09 05:48:46.712599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:52.657 [2024-12-09 05:48:46.725092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.657 [2024-12-09 05:48:46.725613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.657 [2024-12-09 05:48:46.725653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.657 [2024-12-09 05:48:46.725675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.657 [2024-12-09 05:48:46.725914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.657 [2024-12-09 05:48:46.726126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.657 [2024-12-09 05:48:46.726147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.657 [2024-12-09 05:48:46.726164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.657 [2024-12-09 05:48:46.726179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.657 [2024-12-09 05:48:46.738687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.657 [2024-12-09 05:48:46.739087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.657 [2024-12-09 05:48:46.739137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.657 [2024-12-09 05:48:46.739156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.657 [2024-12-09 05:48:46.739383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.657 [2024-12-09 05:48:46.739636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.657 [2024-12-09 05:48:46.739658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.657 [2024-12-09 05:48:46.739673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.657 [2024-12-09 05:48:46.739686] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:52.657 [2024-12-09 05:48:46.752202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.657 [2024-12-09 05:48:46.752615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.657 [2024-12-09 05:48:46.752645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.657 [2024-12-09 05:48:46.752663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.657 [2024-12-09 05:48:46.752895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.657 [2024-12-09 05:48:46.753109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.657 [2024-12-09 05:48:46.753132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.657 [2024-12-09 05:48:46.753145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.657 [2024-12-09 05:48:46.753159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.657 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:52.657 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:53:52.657 [2024-12-09 05:48:46.765899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.657 [2024-12-09 05:48:46.766285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.657 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:53:52.657 [2024-12-09 05:48:46.766332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.657 [2024-12-09 05:48:46.766350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.657 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:53:52.657 [2024-12-09 05:48:46.766585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.657 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:53:52.657 [2024-12-09 05:48:46.766809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.657 [2024-12-09 05:48:46.766833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.657 [2024-12-09 05:48:46.766847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.657 [2024-12-09 05:48:46.766861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:52.657 [2024-12-09 05:48:46.779480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.657 [2024-12-09 05:48:46.779903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.657 [2024-12-09 05:48:46.779934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.657 [2024-12-09 05:48:46.779951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.657 [2024-12-09 05:48:46.780181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.657 [2024-12-09 05:48:46.780441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.657 [2024-12-09 05:48:46.780464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.657 [2024-12-09 05:48:46.780480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.657 [2024-12-09 05:48:46.780494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.657 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:53:52.657 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:53:52.657 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:52.657 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:53:52.657 [2024-12-09 05:48:46.793002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.657 [2024-12-09 05:48:46.793376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.657 [2024-12-09 05:48:46.793407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.657 [2024-12-09 05:48:46.793424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.657 [2024-12-09 05:48:46.793440] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:52.657 [2024-12-09 05:48:46.793657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.657 [2024-12-09 05:48:46.793880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.657 [2024-12-09 05:48:46.793901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.657 [2024-12-09 05:48:46.793914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.657 [2024-12-09 05:48:46.793927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
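In the shell trace above, rpc_cmd is the test harness's wrapper around SPDK's RPC interface, and the 'TCP Transport Init' notice confirms the call took effect. Outside the harness the same step would look roughly like this, with the flags copied verbatim from the trace (scripts/rpc.py talks to the target's default /var/tmp/spdk.sock socket):

  # Create the TCP transport on the restarted target (-o and -u 8192 exactly as in the trace).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      nvmf_create_transport -t tcp -o -u 8192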
00:53:52.657 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:52.657 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:53:52.657 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:52.657 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:53:52.657 [2024-12-09 05:48:46.806662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.657 [2024-12-09 05:48:46.807043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.657 [2024-12-09 05:48:46.807075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.657 [2024-12-09 05:48:46.807094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.657 [2024-12-09 05:48:46.807354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.657 [2024-12-09 05:48:46.807590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.657 [2024-12-09 05:48:46.807628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.657 [2024-12-09 05:48:46.807643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.657 [2024-12-09 05:48:46.807658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.657 3721.17 IOPS, 14.54 MiB/s [2024-12-09T04:48:46.882Z] [2024-12-09 05:48:46.820310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.657 [2024-12-09 05:48:46.820733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.657 [2024-12-09 05:48:46.820763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.657 [2024-12-09 05:48:46.820781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.657 [2024-12-09 05:48:46.821026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.657 [2024-12-09 05:48:46.821245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.657 [2024-12-09 05:48:46.821294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.657 [2024-12-09 05:48:46.821319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.657 [2024-12-09 05:48:46.821333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:53:52.657 [2024-12-09 05:48:46.833844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.657 [2024-12-09 05:48:46.834431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.658 [2024-12-09 05:48:46.834474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.658 [2024-12-09 05:48:46.834494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.658 [2024-12-09 05:48:46.834749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.658 [2024-12-09 05:48:46.834960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.658 [2024-12-09 05:48:46.834982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.658 [2024-12-09 05:48:46.834999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:53:52.658 [2024-12-09 05:48:46.835016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.658 Malloc0 00:53:52.658 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:52.658 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:53:52.658 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:52.658 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:53:52.658 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:52.658 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:53:52.658 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:52.658 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:53:52.658 [2024-12-09 05:48:46.847612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.658 [2024-12-09 05:48:46.848069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:52.658 [2024-12-09 05:48:46.848100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2472a50 with addr=10.0.0.2, port=4420 00:53:52.658 [2024-12-09 05:48:46.848117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472a50 is same with the state(6) to be set 00:53:52.658 [2024-12-09 05:48:46.848350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2472a50 (9): Bad file descriptor 00:53:52.658 [2024-12-09 05:48:46.848587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:53:52.658 [2024-12-09 05:48:46.848610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:53:52.658 [2024-12-09 05:48:46.848641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:53:52.658 [2024-12-09 05:48:46.848655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:53:52.658 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:52.658 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:53:52.658 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:52.658 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:53:52.658 [2024-12-09 05:48:46.857923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:53:52.658 [2024-12-09 05:48:46.861329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:53:52.658 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:52.658 05:48:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 759545 00:53:52.916 [2024-12-09 05:48:46.929076] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:53:54.779 4238.14 IOPS, 16.56 MiB/s [2024-12-09T04:48:49.933Z] 4762.12 IOPS, 18.60 MiB/s [2024-12-09T04:48:50.862Z] 5189.89 IOPS, 20.27 MiB/s [2024-12-09T04:48:52.230Z] 5509.40 IOPS, 21.52 MiB/s [2024-12-09T04:48:53.160Z] 5786.45 IOPS, 22.60 MiB/s [2024-12-09T04:48:54.093Z] 6010.67 IOPS, 23.48 MiB/s [2024-12-09T04:48:55.027Z] 6202.69 IOPS, 24.23 MiB/s [2024-12-09T04:48:55.959Z] 6374.57 IOPS, 24.90 MiB/s 00:54:01.734 Latency(us) 00:54:01.734 [2024-12-09T04:48:55.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:01.734 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:54:01.734 Verification LBA range: start 0x0 length 0x4000 00:54:01.734 Nvme1n1 : 15.00 6514.99 25.45 10131.99 0.00 7666.34 682.67 24563.86 00:54:01.734 [2024-12-09T04:48:55.959Z] =================================================================================================================== 00:54:01.734 [2024-12-09T04:48:55.959Z] Total : 6514.99 25.45 10131.99 0.00 7666.34 682.67 24563.86 00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 
00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:54:01.992 rmmod nvme_tcp 00:54:01.992 rmmod nvme_fabrics 00:54:01.992 rmmod nvme_keyring 00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 760221 ']' 00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 760221 00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 760221 ']' 00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 760221 00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 760221 00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 760221' 00:54:01.992 killing process with pid 760221 00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 760221 00:54:01.992 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 760221 00:54:02.250 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:54:02.250 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:54:02.250 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:54:02.250 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:54:02.250 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:54:02.250 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:54:02.250 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:54:02.250 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:54:02.250 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:54:02.250 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:02.250 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:54:02.250 05:48:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:04.783 05:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:54:04.783 00:54:04.783 real 0m22.741s 00:54:04.783 user 1m0.576s 00:54:04.783 sys 0m4.242s 00:54:04.783 05:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:54:04.783 05:48:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 
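A condensed sketch of the target-side setup that the nvmf_bdevperf test above exercised, restating the rpc_cmd calls visible in the log (rpc_cmd is the autotest wrapper around SPDK's JSON-RPC client; the NQN, serial number, and 10.0.0.2:4420 listener are the values from this particular run, not general defaults):

# target setup issued by host/bdevperf.sh via rpc_cmd, as logged above
rpc_cmd nvmf_create_transport -t tcp -o -u 8192                    # TCP transport with the option set used by this test
rpc_cmd bdev_malloc_create 64 512 -b Malloc0                       # 64 MB malloc (RAM) bdev, 512-byte block size
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose Malloc0 as a namespace of cnode1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # listen on 10.0.0.2:4420

Bdevperf then drives I/O against that listener; the Nvme1n1 summary table above reports the throughput and latency of the resulting 15-second run before the test tears the target down.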
00:54:04.783 ************************************ 00:54:04.783 END TEST nvmf_bdevperf 00:54:04.783 ************************************ 00:54:04.783 05:48:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:54:04.783 05:48:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:54:04.783 05:48:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:54:04.783 05:48:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:54:04.783 ************************************ 00:54:04.783 START TEST nvmf_target_disconnect 00:54:04.783 ************************************ 00:54:04.783 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:54:04.783 * Looking for test storage... 00:54:04.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:54:04.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:04.784 --rc genhtml_branch_coverage=1 00:54:04.784 --rc genhtml_function_coverage=1 00:54:04.784 --rc genhtml_legend=1 00:54:04.784 --rc geninfo_all_blocks=1 00:54:04.784 --rc geninfo_unexecuted_blocks=1 00:54:04.784 00:54:04.784 ' 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:54:04.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:04.784 --rc genhtml_branch_coverage=1 00:54:04.784 --rc genhtml_function_coverage=1 00:54:04.784 --rc genhtml_legend=1 00:54:04.784 --rc geninfo_all_blocks=1 00:54:04.784 --rc geninfo_unexecuted_blocks=1 00:54:04.784 00:54:04.784 ' 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:54:04.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:04.784 --rc genhtml_branch_coverage=1 00:54:04.784 --rc genhtml_function_coverage=1 00:54:04.784 --rc genhtml_legend=1 00:54:04.784 --rc geninfo_all_blocks=1 00:54:04.784 --rc geninfo_unexecuted_blocks=1 00:54:04.784 00:54:04.784 ' 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:54:04.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:04.784 --rc genhtml_branch_coverage=1 00:54:04.784 --rc genhtml_function_coverage=1 00:54:04.784 --rc genhtml_legend=1 00:54:04.784 --rc geninfo_all_blocks=1 00:54:04.784 --rc geninfo_unexecuted_blocks=1 00:54:04.784 00:54:04.784 ' 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:54:04.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:54:04.784 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:54:04.785 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:54:04.785 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:54:04.785 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:54:04.785 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:54:04.785 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:54:04.785 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:54:04.785 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:54:04.785 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:54:04.785 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:54:04.785 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:54:04.785 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:54:04.785 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:04.785 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:54:04.785 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:04.785 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:54:04.785 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:54:04.785 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:54:04.785 05:48:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:54:06.718 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:54:06.718 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:54:06.718 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:54:06.718 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:54:06.718 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:54:06.718 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:54:06.718 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:54:06.718 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:54:06.718 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:54:06.718 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:54:06.718 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:54:06.718 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:54:06.718 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:54:06.718 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:54:06.718 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:54:06.718 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:54:06.718 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:54:06.718 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:54:06.718 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:54:06.718 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:54:06.718 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:54:06.976 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:54:06.976 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:54:06.976 Found net devices under 0000:0a:00.0: cvl_0_0 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:54:06.976 Found net devices under 0000:0a:00.1: cvl_0_1 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:54:06.976 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:54:06.977 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:54:06.977 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:54:06.977 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:54:06.977 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:54:06.977 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:54:06.977 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:54:06.977 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:54:06.977 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:54:06.977 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:54:06.977 05:49:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:54:06.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:54:06.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:54:06.977 00:54:06.977 --- 10.0.0.2 ping statistics --- 00:54:06.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:06.977 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:54:06.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:54:06.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:54:06.977 00:54:06.977 --- 10.0.0.1 ping statistics --- 00:54:06.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:06.977 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:54:06.977 ************************************ 00:54:06.977 START TEST nvmf_target_disconnect_tc1 00:54:06.977 ************************************ 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:54:06.977 05:49:01 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:54:06.977 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:54:07.234 [2024-12-09 05:49:01.251821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:07.234 [2024-12-09 05:49:01.251896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dcbf40 with addr=10.0.0.2, port=4420 00:54:07.234 [2024-12-09 05:49:01.251932] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:54:07.234 [2024-12-09 05:49:01.251959] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:54:07.234 [2024-12-09 05:49:01.251974] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:54:07.234 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:54:07.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:54:07.234 Initializing NVMe Controllers 00:54:07.234 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:54:07.234 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:54:07.234 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:54:07.234 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:54:07.234 00:54:07.234 real 0m0.136s 00:54:07.234 user 0m0.084s 00:54:07.234 sys 0m0.052s 00:54:07.234 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:54:07.234 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:54:07.234 ************************************ 00:54:07.234 END TEST nvmf_target_disconnect_tc1 00:54:07.234 ************************************ 00:54:07.234 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:54:07.234 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:54:07.234 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:54:07.234 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:54:07.234 ************************************ 00:54:07.234 START TEST nvmf_target_disconnect_tc2 00:54:07.234 ************************************ 00:54:07.234 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:54:07.234 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:54:07.234 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:54:07.234 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:54:07.234 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:54:07.234 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:54:07.234 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=763391 00:54:07.234 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:54:07.234 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 763391 00:54:07.234 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 763391 ']' 00:54:07.234 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:07.234 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:07.234 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:07.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:07.234 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:07.234 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:54:07.234 [2024-12-09 05:49:01.408690] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:54:07.234 [2024-12-09 05:49:01.408780] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:07.490 [2024-12-09 05:49:01.482399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:54:07.490 [2024-12-09 05:49:01.540716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:54:07.490 [2024-12-09 05:49:01.540769] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:54:07.490 [2024-12-09 05:49:01.540806] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:07.490 [2024-12-09 05:49:01.540818] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:54:07.490 [2024-12-09 05:49:01.540828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:54:07.490 [2024-12-09 05:49:01.542325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:54:07.490 [2024-12-09 05:49:01.542391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:54:07.490 [2024-12-09 05:49:01.542454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:54:07.490 [2024-12-09 05:49:01.542458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:54:07.490 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:07.490 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:54:07.490 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:54:07.490 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:54:07.490 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:54:07.490 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:54:07.490 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:54:07.490 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:07.490 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:54:07.747 Malloc0 00:54:07.747 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:07.747 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:54:07.747 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:07.747 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:54:07.747 [2024-12-09 05:49:01.739566] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:07.747 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:07.747 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:54:07.747 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:07.747 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:54:07.747 05:49:01 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:07.747 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:54:07.747 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:07.747 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:54:07.747 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:07.747 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:54:07.747 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:07.747 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:54:07.747 [2024-12-09 05:49:01.767880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:54:07.747 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:07.747 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:54:07.747 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:07.747 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:54:07.747 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:07.747 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=763413 00:54:07.747 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:54:07.747 05:49:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:54:09.651 05:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 763391 00:54:09.652 05:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error 
(sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Write completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Write completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Write completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Write completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Write completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Write completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Write completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Write completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Write completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Write completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Write completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Write completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 [2024-12-09 05:49:03.793564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Write completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Write completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Write 
completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Write completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Write completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Write completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Write completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Write completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Write completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Write completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Read completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Write completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 Write completed with error (sct=0, sc=8) 00:54:09.652 starting I/O failed 00:54:09.652 [2024-12-09 05:49:03.793936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:54:09.652 [2024-12-09 05:49:03.794148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.652 [2024-12-09 05:49:03.794186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.652 qpair failed and we were unable to recover it. 00:54:09.652 [2024-12-09 05:49:03.794301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.652 [2024-12-09 05:49:03.794330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.652 qpair failed and we were unable to recover it. 00:54:09.652 [2024-12-09 05:49:03.794446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.652 [2024-12-09 05:49:03.794472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.652 qpair failed and we were unable to recover it. 00:54:09.652 [2024-12-09 05:49:03.794574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.652 [2024-12-09 05:49:03.794601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.652 qpair failed and we were unable to recover it. 
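The trace above is the setup half of target_disconnect tc2: rpc_cmd adds the Malloc0 namespace and the tcp listeners for nqn.2016-06.io.spdk:cnode1 and discovery on 10.0.0.2:4420, the reconnect example is started in the background (reconnectpid=763413), and after a two-second grace period the original target process (763391) is killed with SIGKILL to force the disconnect. A minimal sketch of that sequence, assuming a running nvmf_tgt with the transport and subsystem already created; the rpc.py path, the pgrep lookup, and the Malloc0 bdev name are illustrative, not the exact harness code:

    #!/usr/bin/env bash
    # Sketch of the target_disconnect tc2 flow, assuming an SPDK nvmf target is
    # already running with a TCP transport and subsystem nqn.2016-06.io.spdk:cnode1.
    # RPC path, pgrep pattern, and the Malloc0 bdev are assumptions for illustration.
    set -ex

    RPC=./scripts/rpc.py          # assumed location of SPDK's rpc.py
    TGT_PID=$(pgrep -f nvmf_tgt)  # pid of the running target (763391 in the log)

    # Expose a namespace and listeners, mirroring the rpc_cmd calls in the trace.
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Start the reconnect example with the same flags as the log, in the background ...
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!

    # ... give it time to connect, then kill the target to force the disconnect.
    sleep 2
    kill -9 "$TGT_PID"

Everything that follows in the console is the host side reacting to that kill: outstanding I/O is aborted and the reconnect loop keeps retrying a listener that no longer exists.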
00:54:09.652 [2024-12-09 05:49:03.794739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.652 [2024-12-09 05:49:03.794770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.652 qpair failed and we were unable to recover it. 00:54:09.652 [2024-12-09 05:49:03.794888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.652 [2024-12-09 05:49:03.794914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.652 qpair failed and we were unable to recover it. 00:54:09.652 [2024-12-09 05:49:03.795030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.652 [2024-12-09 05:49:03.795056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.652 qpair failed and we were unable to recover it. 00:54:09.652 [2024-12-09 05:49:03.795143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.652 [2024-12-09 05:49:03.795169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.652 qpair failed and we were unable to recover it. 00:54:09.652 [2024-12-09 05:49:03.795306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.652 [2024-12-09 05:49:03.795333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.652 qpair failed and we were unable to recover it. 00:54:09.652 [2024-12-09 05:49:03.795417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.652 [2024-12-09 05:49:03.795443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.652 qpair failed and we were unable to recover it. 00:54:09.652 [2024-12-09 05:49:03.795534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.652 [2024-12-09 05:49:03.795570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.652 qpair failed and we were unable to recover it. 00:54:09.652 [2024-12-09 05:49:03.795677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.652 [2024-12-09 05:49:03.795702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.652 qpair failed and we were unable to recover it. 00:54:09.652 [2024-12-09 05:49:03.795816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.652 [2024-12-09 05:49:03.795842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.652 qpair failed and we were unable to recover it. 00:54:09.653 [2024-12-09 05:49:03.795991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.653 [2024-12-09 05:49:03.796018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.653 qpair failed and we were unable to recover it. 
00:54:09.653 [2024-12-09 05:49:03.796134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.653 [2024-12-09 05:49:03.796160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.653 qpair failed and we were unable to recover it. 00:54:09.653 [2024-12-09 05:49:03.796282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.653 [2024-12-09 05:49:03.796308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.653 qpair failed and we were unable to recover it. 00:54:09.653 [2024-12-09 05:49:03.796409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.653 [2024-12-09 05:49:03.796435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.653 qpair failed and we were unable to recover it. 00:54:09.653 [2024-12-09 05:49:03.796527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.653 [2024-12-09 05:49:03.796554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.653 qpair failed and we were unable to recover it. 00:54:09.653 [2024-12-09 05:49:03.796662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.653 [2024-12-09 05:49:03.796704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.653 qpair failed and we were unable to recover it. 00:54:09.653 [2024-12-09 05:49:03.796816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.653 [2024-12-09 05:49:03.796842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.653 qpair failed and we were unable to recover it. 00:54:09.653 [2024-12-09 05:49:03.797019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.653 [2024-12-09 05:49:03.797046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.653 qpair failed and we were unable to recover it. 00:54:09.653 [2024-12-09 05:49:03.797129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.653 [2024-12-09 05:49:03.797154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.653 qpair failed and we were unable to recover it. 00:54:09.653 [2024-12-09 05:49:03.797264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.653 [2024-12-09 05:49:03.797297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.653 qpair failed and we were unable to recover it. 00:54:09.653 [2024-12-09 05:49:03.797403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.653 [2024-12-09 05:49:03.797447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.653 qpair failed and we were unable to recover it. 
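The repeated connect() failed, errno = 111 records are the host-side retries: with the target gone, nothing is listening on 10.0.0.2:4420, so every attempt made by nvme_tcp_qpair_connect_sock is refused and the qpair cannot be recovered. errno 111 is ECONNREFUSED on Linux, and the same failure is easy to reproduce against any address/port with no listener (the address below is only an example):

    # No listener on the port => connect(2) fails with ECONNREFUSED (errno 111 on Linux).
    bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'    # reports "Connection refused" once the target is down

    # Confirm the numeric value that the posix sock layer is logging:
    python3 -c 'import errno; print(errno.ECONNREFUSED)'    # -> 111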
00:54:09.653 Read completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Read completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Read completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Read completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Read completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Read completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Read completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Read completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Read completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Read completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Read completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Read completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Read completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Read completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Read completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Read completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Write completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Write completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Write completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Read completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Write completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Write completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Write completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Write completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Read completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Write completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Write completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Write completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Write completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Read completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Write completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 Read completed with error (sct=0, sc=8) 00:54:09.653 starting I/O failed 00:54:09.653 [2024-12-09 05:49:03.797807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:54:09.653 [2024-12-09 05:49:03.797991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.653 [2024-12-09 05:49:03.798028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.653 qpair failed and we were unable to recover it. 
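Each Read/Write completed with error (sct=0, sc=8) entry is the completion of an I/O that was still in flight when its qpair died: status code type 0 is the NVMe generic command status set, and generic status 0x08 should correspond to Command Aborted due to SQ Deletion, which is what the driver reports after tearing down the submission queue on the CQ transport error. A small helper for decoding the handful of (sct, sc) pairs that show up here; this is a convenience sketch with only a few generic codes mapped, not a full NVMe status table:

    # Tiny decoder for the (sct, sc) pairs seen in this trace.  Only a few generic
    # status values are mapped; treat the names as a reading aid, not a spec dump.
    decode_nvme_status() {
        local sct=$1 sc=$2
        case "$sct" in
            0) case "$sc" in
                   0) echo "generic: successful completion" ;;
                   4) echo "generic: data transfer error" ;;
                   8) echo "generic: command aborted due to SQ deletion" ;;
                   *) echo "generic: status 0x$(printf '%x' "$sc") (not mapped here)" ;;
               esac ;;
            *) echo "sct=$sct sc=$sc (not mapped here)" ;;
        esac
    }

    decode_nvme_status 0 8   # -> generic: command aborted due to SQ deletion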
00:54:09.653 [2024-12-09 05:49:03.798160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.653 [2024-12-09 05:49:03.798188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.653 qpair failed and we were unable to recover it. 00:54:09.653 [2024-12-09 05:49:03.798302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.653 [2024-12-09 05:49:03.798330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.653 qpair failed and we were unable to recover it. 00:54:09.653 [2024-12-09 05:49:03.798428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.653 [2024-12-09 05:49:03.798455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.653 qpair failed and we were unable to recover it. 00:54:09.653 [2024-12-09 05:49:03.798619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.653 [2024-12-09 05:49:03.798646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.653 qpair failed and we were unable to recover it. 00:54:09.653 [2024-12-09 05:49:03.798788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.653 [2024-12-09 05:49:03.798815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.653 qpair failed and we were unable to recover it. 00:54:09.653 [2024-12-09 05:49:03.798932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.653 [2024-12-09 05:49:03.798958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.653 qpair failed and we were unable to recover it. 00:54:09.653 [2024-12-09 05:49:03.799081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.653 [2024-12-09 05:49:03.799109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.653 qpair failed and we were unable to recover it. 00:54:09.653 [2024-12-09 05:49:03.799211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.653 [2024-12-09 05:49:03.799250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.653 qpair failed and we were unable to recover it. 00:54:09.653 [2024-12-09 05:49:03.799366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.653 [2024-12-09 05:49:03.799394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.653 qpair failed and we were unable to recover it. 00:54:09.653 [2024-12-09 05:49:03.799502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.653 [2024-12-09 05:49:03.799529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.653 qpair failed and we were unable to recover it. 
00:54:09.653 [2024-12-09 05:49:03.799614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.653 [2024-12-09 05:49:03.799650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.653 qpair failed and we were unable to recover it. 00:54:09.653 [2024-12-09 05:49:03.799767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.653 [2024-12-09 05:49:03.799793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.653 qpair failed and we were unable to recover it. 00:54:09.653 [2024-12-09 05:49:03.799878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.653 [2024-12-09 05:49:03.799913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.653 qpair failed and we were unable to recover it. 00:54:09.653 [2024-12-09 05:49:03.800115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.653 [2024-12-09 05:49:03.800143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.653 qpair failed and we were unable to recover it. 00:54:09.653 [2024-12-09 05:49:03.800305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.653 [2024-12-09 05:49:03.800332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.800417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.800444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.800537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.800570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.800683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.800710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.800819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.800845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.800980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.801006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 
00:54:09.654 [2024-12-09 05:49:03.801145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.801171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.801256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.801294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.801415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.801442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.801529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.801578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.801735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.801761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.801900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.801925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.802072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.802098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.802203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.802230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.802335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.802362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.802478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.802505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 
00:54:09.654 [2024-12-09 05:49:03.802621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.802646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.802757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.802783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.802893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.802918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.803062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.803089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.803205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.803230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.803341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.803367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.803448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.803473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.803589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.803615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.803732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.803758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.803920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.803952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 
00:54:09.654 [2024-12-09 05:49:03.804068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.804095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.804234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.804277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.804370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.804396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.804489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.804514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.804626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.804651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.804742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.804768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.804879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.804919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.805037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.805065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.805184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.805211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.805335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.805362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 
00:54:09.654 [2024-12-09 05:49:03.805447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.805473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.805587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.805613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.654 qpair failed and we were unable to recover it. 00:54:09.654 [2024-12-09 05:49:03.805728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.654 [2024-12-09 05:49:03.805754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.805877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.805905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.806028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.806068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.806186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.806214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.806359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.806386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.806467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.806493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.806604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.806629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.806743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.806768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 
00:54:09.655 [2024-12-09 05:49:03.806853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.806881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.807001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.807031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.807161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.807189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.807284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.807312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.807427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.807455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.807606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.807633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.807710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.807737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.807826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.807852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.807965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.807991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.808097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.808122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 
00:54:09.655 [2024-12-09 05:49:03.808243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.808281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.808398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.808423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.808532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.808569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.808654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.808679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.808752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.808777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.808921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.808949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.809101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.809140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.809257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.809304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.809423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.809450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.809532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.809573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 
00:54:09.655 [2024-12-09 05:49:03.809712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.809738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.809821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.809849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.809927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.809955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.810072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.810099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.810208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.810235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.810336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.810365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.810479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.810505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.810624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.810650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.810756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.810782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 00:54:09.655 [2024-12-09 05:49:03.810920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.655 [2024-12-09 05:49:03.810946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.655 qpair failed and we were unable to recover it. 
00:54:09.655 [2024-12-09 05:49:03.811066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.811095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 00:54:09.656 [2024-12-09 05:49:03.811227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.811284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 00:54:09.656 [2024-12-09 05:49:03.811407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.811436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 00:54:09.656 [2024-12-09 05:49:03.811586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.811614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 00:54:09.656 [2024-12-09 05:49:03.811729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.811756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 00:54:09.656 [2024-12-09 05:49:03.811870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.811897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 00:54:09.656 [2024-12-09 05:49:03.812016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.812044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 00:54:09.656 [2024-12-09 05:49:03.812170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.812210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 00:54:09.656 [2024-12-09 05:49:03.812371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.812400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 00:54:09.656 [2024-12-09 05:49:03.812551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.812586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 
00:54:09.656 [2024-12-09 05:49:03.812668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.812693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 00:54:09.656 [2024-12-09 05:49:03.812779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.812806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 00:54:09.656 [2024-12-09 05:49:03.812919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.812945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 00:54:09.656 [2024-12-09 05:49:03.813066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.813106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 00:54:09.656 [2024-12-09 05:49:03.813223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.813250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 00:54:09.656 [2024-12-09 05:49:03.813382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.813411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 00:54:09.656 [2024-12-09 05:49:03.813496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.813527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 00:54:09.656 [2024-12-09 05:49:03.813618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.813647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 00:54:09.656 [2024-12-09 05:49:03.813787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.813814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 00:54:09.656 [2024-12-09 05:49:03.813894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.813922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 
00:54:09.656 [2024-12-09 05:49:03.814037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.814063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 00:54:09.656 [2024-12-09 05:49:03.814197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.814237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 00:54:09.656 [2024-12-09 05:49:03.814381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.814409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 00:54:09.656 [2024-12-09 05:49:03.814528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.814554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 00:54:09.656 [2024-12-09 05:49:03.814682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.814709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 00:54:09.656 [2024-12-09 05:49:03.814790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.814816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 00:54:09.656 [2024-12-09 05:49:03.814933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.814960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 00:54:09.656 [2024-12-09 05:49:03.815100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.815128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 00:54:09.656 [2024-12-09 05:49:03.815216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.656 [2024-12-09 05:49:03.815246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.656 qpair failed and we were unable to recover it. 
00:54:09.656 Read completed with error (sct=0, sc=8) 00:54:09.656 starting I/O failed 00:54:09.656 Read completed with error (sct=0, sc=8) 00:54:09.656 starting I/O failed 00:54:09.656 Read completed with error (sct=0, sc=8) 00:54:09.656 starting I/O failed 00:54:09.656 Read completed with error (sct=0, sc=8) 00:54:09.656 starting I/O failed 00:54:09.656 Read completed with error (sct=0, sc=8) 00:54:09.656 starting I/O failed 00:54:09.656 Read completed with error (sct=0, sc=8) 00:54:09.656 starting I/O failed 00:54:09.656 Read completed with error (sct=0, sc=8) 00:54:09.656 starting I/O failed 00:54:09.656 Read completed with error (sct=0, sc=8) 00:54:09.656 starting I/O failed 00:54:09.656 Read completed with error (sct=0, sc=8) 00:54:09.656 starting I/O failed 00:54:09.656 Read completed with error (sct=0, sc=8) 00:54:09.656 starting I/O failed 00:54:09.656 Read completed with error (sct=0, sc=8) 00:54:09.656 starting I/O failed 00:54:09.656 Write completed with error (sct=0, sc=8) 00:54:09.657 starting I/O failed 00:54:09.657 Write completed with error (sct=0, sc=8) 00:54:09.657 starting I/O failed 00:54:09.657 Read completed with error (sct=0, sc=8) 00:54:09.657 starting I/O failed 00:54:09.657 Read completed with error (sct=0, sc=8) 00:54:09.657 starting I/O failed 00:54:09.657 Write completed with error (sct=0, sc=8) 00:54:09.657 starting I/O failed 00:54:09.657 Write completed with error (sct=0, sc=8) 00:54:09.657 starting I/O failed 00:54:09.657 Write completed with error (sct=0, sc=8) 00:54:09.657 starting I/O failed 00:54:09.657 Write completed with error (sct=0, sc=8) 00:54:09.657 starting I/O failed 00:54:09.657 Write completed with error (sct=0, sc=8) 00:54:09.657 starting I/O failed 00:54:09.657 Write completed with error (sct=0, sc=8) 00:54:09.657 starting I/O failed 00:54:09.657 Write completed with error (sct=0, sc=8) 00:54:09.657 starting I/O failed 00:54:09.657 Write completed with error (sct=0, sc=8) 00:54:09.657 starting I/O failed 00:54:09.657 Read completed with error (sct=0, sc=8) 00:54:09.657 starting I/O failed 00:54:09.657 Write completed with error (sct=0, sc=8) 00:54:09.657 starting I/O failed 00:54:09.657 Write completed with error (sct=0, sc=8) 00:54:09.657 starting I/O failed 00:54:09.657 Write completed with error (sct=0, sc=8) 00:54:09.657 starting I/O failed 00:54:09.657 Write completed with error (sct=0, sc=8) 00:54:09.657 starting I/O failed 00:54:09.657 Read completed with error (sct=0, sc=8) 00:54:09.657 starting I/O failed 00:54:09.657 Read completed with error (sct=0, sc=8) 00:54:09.657 starting I/O failed 00:54:09.657 Read completed with error (sct=0, sc=8) 00:54:09.657 starting I/O failed 00:54:09.657 Write completed with error (sct=0, sc=8) 00:54:09.657 starting I/O failed 00:54:09.657 [2024-12-09 05:49:03.815563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:54:09.657 [2024-12-09 05:49:03.815655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.815683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 
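Bursts like the one above, a sweep of 32 aborted commands followed by CQ transport error -6 on another qpair id, repeat for every queue pair the reconnect app had open, which is why this stretch of the console is so long. For triage it is usually quicker to summarize the stream than to read it; a few grep one-liners over a saved copy of the console (console.log below is just a placeholder name):

    # Rough triage of the disconnect phase, assuming the console output was saved
    # to console.log (the filename is an example, not part of the test).

    # How many in-flight commands were aborted, split by direction:
    grep -c 'Read completed with error'  console.log
    grep -c 'Write completed with error' console.log

    # Which qpairs saw the CQ transport error, and how often:
    grep -o 'CQ transport error -6 .* on qpair id [0-9]*' console.log | sort | uniq -c

    # How many reconnect attempts were refused while the target was down:
    grep -c 'connect() failed, errno = 111' console.log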
00:54:09.657 [2024-12-09 05:49:03.815823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.815850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 00:54:09.657 [2024-12-09 05:49:03.815941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.815967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 00:54:09.657 [2024-12-09 05:49:03.816076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.816103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 00:54:09.657 [2024-12-09 05:49:03.816217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.816243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 00:54:09.657 [2024-12-09 05:49:03.816341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.816367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 00:54:09.657 [2024-12-09 05:49:03.816459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.816487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 00:54:09.657 [2024-12-09 05:49:03.816591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.816630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 00:54:09.657 [2024-12-09 05:49:03.816750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.816779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 00:54:09.657 [2024-12-09 05:49:03.816863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.816889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 00:54:09.657 [2024-12-09 05:49:03.817036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.817062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 
00:54:09.657 [2024-12-09 05:49:03.817148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.817174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 00:54:09.657 [2024-12-09 05:49:03.817265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.817305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 00:54:09.657 [2024-12-09 05:49:03.817397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.817426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 00:54:09.657 [2024-12-09 05:49:03.817569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.817596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 00:54:09.657 [2024-12-09 05:49:03.817710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.817737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 00:54:09.657 [2024-12-09 05:49:03.817848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.817875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 00:54:09.657 [2024-12-09 05:49:03.817955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.817982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 00:54:09.657 [2024-12-09 05:49:03.818104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.818155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 00:54:09.657 [2024-12-09 05:49:03.818246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.818282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 00:54:09.657 [2024-12-09 05:49:03.818428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.818454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 
00:54:09.657 [2024-12-09 05:49:03.818600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.818626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 00:54:09.657 [2024-12-09 05:49:03.818712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.818739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 00:54:09.657 [2024-12-09 05:49:03.818854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.818880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 00:54:09.657 [2024-12-09 05:49:03.818994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.819019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 00:54:09.657 [2024-12-09 05:49:03.819122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.819161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 00:54:09.657 [2024-12-09 05:49:03.819296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.819336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 00:54:09.657 [2024-12-09 05:49:03.819458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.819486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 00:54:09.657 [2024-12-09 05:49:03.819595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.819621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 00:54:09.657 [2024-12-09 05:49:03.819736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.819762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.657 qpair failed and we were unable to recover it. 00:54:09.657 [2024-12-09 05:49:03.819873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.657 [2024-12-09 05:49:03.819899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 
00:54:09.658 [2024-12-09 05:49:03.819984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.820011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.820099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.820125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.820236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.820262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.820366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.820394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.820531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.820558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.820637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.820664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.820777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.820804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.820888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.820914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.820991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.821017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.821110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.821137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 
00:54:09.658 [2024-12-09 05:49:03.821247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.821281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.821413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.821453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.821542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.821569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.821683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.821709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.821848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.821875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.821990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.822016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.822153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.822180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.822300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.822327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.822440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.822466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.822569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.822609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 
00:54:09.658 [2024-12-09 05:49:03.822730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.822759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.822880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.822906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.823024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.823050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.823185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.823212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.823325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.823366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.823451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.823478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.823635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.823664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.823746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.823773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.823868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.823896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.824012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.824037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 
00:54:09.658 [2024-12-09 05:49:03.824154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.824182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.824313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.824354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.824480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.824507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.824594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.824620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.824693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.824718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.824858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.824886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.658 [2024-12-09 05:49:03.825012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.658 [2024-12-09 05:49:03.825039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.658 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.825129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.825157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.825251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.825284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.825404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.825433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 
00:54:09.659 [2024-12-09 05:49:03.825515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.825541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.825680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.825707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.825849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.825876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.826052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.826120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.826207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.826233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.826358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.826386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.826483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.826510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.826631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.826657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.826770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.826797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.826913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.826941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 
00:54:09.659 [2024-12-09 05:49:03.827032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.827059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.827168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.827194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.827311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.827338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.827436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.827463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.827547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.827573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.827710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.827737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.827845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.827871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.827991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.828016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.828097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.828123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.828240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.828264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 
00:54:09.659 [2024-12-09 05:49:03.828377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.828402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.828513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.828538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.828633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.828659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.828777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.828803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.828917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.828947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.829061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.829086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.829249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.829296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.829426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.829467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.829551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.829580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.829694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.829722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 
00:54:09.659 [2024-12-09 05:49:03.829823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.829851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.829960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.829987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.830125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.830152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.830265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.830297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.659 [2024-12-09 05:49:03.830411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.659 [2024-12-09 05:49:03.830439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.659 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.830524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.830550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.830697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.830724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.830877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.830918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.831037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.831065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.831139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.831165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 
00:54:09.660 [2024-12-09 05:49:03.831307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.831334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.831425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.831451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.831541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.831566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.831682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.831717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.831832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.831859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.831952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.831992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.832089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.832117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.832218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.832258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.832391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.832419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.832533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.832560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 
00:54:09.660 [2024-12-09 05:49:03.832676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.832703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.832795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.832822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.832937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.832964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.833104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.833143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.833261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.833296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.833416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.833442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.833537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.833565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.833658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.833684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.833803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.833831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.833938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.833965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 
00:54:09.660 [2024-12-09 05:49:03.834129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.834170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.834264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.834311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.834426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.834453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.834538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.834564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.834673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.834699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.834816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.834842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.834924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.834950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.835065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.835095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.835215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.835255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.835390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.835419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 
00:54:09.660 [2024-12-09 05:49:03.835502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.835534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.835645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.835672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.660 qpair failed and we were unable to recover it. 00:54:09.660 [2024-12-09 05:49:03.835756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.660 [2024-12-09 05:49:03.835782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.661 qpair failed and we were unable to recover it. 00:54:09.661 [2024-12-09 05:49:03.835897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.661 [2024-12-09 05:49:03.835924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.661 qpair failed and we were unable to recover it. 00:54:09.661 [2024-12-09 05:49:03.836007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.661 [2024-12-09 05:49:03.836035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.661 qpair failed and we were unable to recover it. 00:54:09.661 [2024-12-09 05:49:03.836156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.661 [2024-12-09 05:49:03.836181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.661 qpair failed and we were unable to recover it. 00:54:09.661 [2024-12-09 05:49:03.836261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.661 [2024-12-09 05:49:03.836298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.661 qpair failed and we were unable to recover it. 00:54:09.661 [2024-12-09 05:49:03.836391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.661 [2024-12-09 05:49:03.836417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.661 qpair failed and we were unable to recover it. 00:54:09.661 [2024-12-09 05:49:03.836500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.661 [2024-12-09 05:49:03.836526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.661 qpair failed and we were unable to recover it. 00:54:09.661 [2024-12-09 05:49:03.836644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.661 [2024-12-09 05:49:03.836670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.661 qpair failed and we were unable to recover it. 
00:54:09.661 [2024-12-09 05:49:03.836748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:09.661 [2024-12-09 05:49:03.836775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420
00:54:09.661 qpair failed and we were unable to recover it.
00:54:09.666 [... the same three-line failure pattern (connect() failed, errno = 111; sock connection error of tqpair=... with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 05:49:03.836748 through 05:49:03.867457, cycling over tqpairs 0xe16fa0, 0x7faa08000b90, 0x7faa0c000b90, and 0x7faa14000b90 ...]
00:54:09.666 [2024-12-09 05:49:03.867580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.666 [2024-12-09 05:49:03.867606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.666 qpair failed and we were unable to recover it. 00:54:09.666 [2024-12-09 05:49:03.867716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.667 [2024-12-09 05:49:03.867743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.667 qpair failed and we were unable to recover it. 00:54:09.667 [2024-12-09 05:49:03.867888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.667 [2024-12-09 05:49:03.867915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.667 qpair failed and we were unable to recover it. 00:54:09.667 [2024-12-09 05:49:03.868005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.667 [2024-12-09 05:49:03.868034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.667 qpair failed and we were unable to recover it. 00:54:09.667 [2024-12-09 05:49:03.868121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.667 [2024-12-09 05:49:03.868147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.667 qpair failed and we were unable to recover it. 00:54:09.667 [2024-12-09 05:49:03.868245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.667 [2024-12-09 05:49:03.868279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.667 qpair failed and we were unable to recover it. 00:54:09.667 [2024-12-09 05:49:03.868377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.667 [2024-12-09 05:49:03.868404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.667 qpair failed and we were unable to recover it. 00:54:09.667 [2024-12-09 05:49:03.868490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.667 [2024-12-09 05:49:03.868518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.951 qpair failed and we were unable to recover it. 00:54:09.951 [2024-12-09 05:49:03.868616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.951 [2024-12-09 05:49:03.868645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.951 qpair failed and we were unable to recover it. 00:54:09.951 [2024-12-09 05:49:03.868735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.951 [2024-12-09 05:49:03.868761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.951 qpair failed and we were unable to recover it. 
00:54:09.951 [2024-12-09 05:49:03.868988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.951 [2024-12-09 05:49:03.869044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.951 qpair failed and we were unable to recover it. 00:54:09.951 [2024-12-09 05:49:03.869126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.951 [2024-12-09 05:49:03.869153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.951 qpair failed and we were unable to recover it. 00:54:09.951 [2024-12-09 05:49:03.869276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.951 [2024-12-09 05:49:03.869314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.951 qpair failed and we were unable to recover it. 00:54:09.951 [2024-12-09 05:49:03.869395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.951 [2024-12-09 05:49:03.869422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.951 qpair failed and we were unable to recover it. 00:54:09.951 [2024-12-09 05:49:03.869504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.951 [2024-12-09 05:49:03.869530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.951 qpair failed and we were unable to recover it. 00:54:09.951 [2024-12-09 05:49:03.869624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.951 [2024-12-09 05:49:03.869651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.951 qpair failed and we were unable to recover it. 00:54:09.951 [2024-12-09 05:49:03.869763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.951 [2024-12-09 05:49:03.869789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.951 qpair failed and we were unable to recover it. 00:54:09.951 [2024-12-09 05:49:03.869880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.951 [2024-12-09 05:49:03.869906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.951 qpair failed and we were unable to recover it. 00:54:09.951 [2024-12-09 05:49:03.869994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.951 [2024-12-09 05:49:03.870019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.951 qpair failed and we were unable to recover it. 00:54:09.951 [2024-12-09 05:49:03.870119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.951 [2024-12-09 05:49:03.870159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.951 qpair failed and we were unable to recover it. 
00:54:09.951 [2024-12-09 05:49:03.870254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.951 [2024-12-09 05:49:03.870303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.951 qpair failed and we were unable to recover it. 00:54:09.951 [2024-12-09 05:49:03.870404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.951 [2024-12-09 05:49:03.870433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.951 qpair failed and we were unable to recover it. 00:54:09.951 [2024-12-09 05:49:03.870533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.951 [2024-12-09 05:49:03.870560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.951 qpair failed and we were unable to recover it. 00:54:09.951 [2024-12-09 05:49:03.870680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.951 [2024-12-09 05:49:03.870706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.951 qpair failed and we were unable to recover it. 00:54:09.951 [2024-12-09 05:49:03.870788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.951 [2024-12-09 05:49:03.870814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.951 qpair failed and we were unable to recover it. 00:54:09.951 [2024-12-09 05:49:03.870904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.870932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.871032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.871063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.871180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.871206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.871294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.871332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.871443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.871470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 
00:54:09.952 [2024-12-09 05:49:03.871552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.871588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.871684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.871711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.871821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.871848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.871925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.871952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.872091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.872117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.872208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.872236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.872348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.872389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.872478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.872507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.872601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.872628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.872712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.872739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 
00:54:09.952 [2024-12-09 05:49:03.872854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.872881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.873008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.873037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.873132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.873173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.873294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.873325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.873412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.873440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.873523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.873552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.873669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.873697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.873806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.873833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.873915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.873949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.874046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.874076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 
00:54:09.952 [2024-12-09 05:49:03.874164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.874193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.874268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.874303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.874419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.874446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.874531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.874557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.874666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.874692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.874783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.874812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.874931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.874958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.875038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.875065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.875151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.875178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.875262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.875297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 
00:54:09.952 [2024-12-09 05:49:03.875408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.875435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.875522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.875549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.875675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.952 [2024-12-09 05:49:03.875702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.952 qpair failed and we were unable to recover it. 00:54:09.952 [2024-12-09 05:49:03.875852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.875878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.875963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.875992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.876105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.876132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.876239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.876266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.876424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.876451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.876596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.876626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.876719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.876746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 
00:54:09.953 [2024-12-09 05:49:03.876835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.876861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.876971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.876997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.877107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.877133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.877247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.877279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.877372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.877399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.877498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.877538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.877621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.877649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.877794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.877822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.877934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.877961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.878075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.878103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 
00:54:09.953 [2024-12-09 05:49:03.878244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.878277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.878385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.878413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.878526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.878553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.878668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.878695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.878785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.878814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.878926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.878952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.879042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.879071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.879159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.879187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.879289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.879318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.879464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.879490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 
00:54:09.953 [2024-12-09 05:49:03.879585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.879612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.879728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.879754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.879869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.879897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.880035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.880063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.880179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.880207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.880288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.880314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.880400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.880427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.880524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.880564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.880799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.880853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.953 [2024-12-09 05:49:03.881032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.881091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 
00:54:09.953 [2024-12-09 05:49:03.881196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.953 [2024-12-09 05:49:03.881223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.953 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.881311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.881340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.881491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.881517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.881635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.881662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.881776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.881803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.881943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.881968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.882086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.882115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.882256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.882294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.882491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.882518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.882627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.882654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 
00:54:09.954 [2024-12-09 05:49:03.882744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.882772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.882893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.882947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.883029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.883056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.883163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.883190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.883267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.883301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.883435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.883480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.883603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.883630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.883720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.883748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.883834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.883861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.883978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.884006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 
00:54:09.954 [2024-12-09 05:49:03.884133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.884174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.884290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.884318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.884404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.884432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.884512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.884539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.884654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.884681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.884761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.884788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.884931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.884960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.885079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.885108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.885197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.885225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.885327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.885355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 
00:54:09.954 [2024-12-09 05:49:03.885445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.885472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.885550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.885578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.885663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.885691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.885885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.885912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.886021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.886048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.886126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.886154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.886262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.886297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.886379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.886405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.954 [2024-12-09 05:49:03.886498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.954 [2024-12-09 05:49:03.886525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.954 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.886610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.886638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 
00:54:09.955 [2024-12-09 05:49:03.886757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.886784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.886876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.886904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.887005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.887035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.887121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.887148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.887256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.887292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.887403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.887429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.887522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.887550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.887640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.887667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.887750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.887779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.887853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.887880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 
00:54:09.955 [2024-12-09 05:49:03.887963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.887990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.888134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.888162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.888301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.888342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.888475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.888514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.888660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.888688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.888792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.888823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.888996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.889060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.889144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.889169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.889250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.889283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.889365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.889392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 
00:54:09.955 [2024-12-09 05:49:03.889471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.889498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.889615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.889641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.889737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.889765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.889855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.889882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.889963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.889988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.890093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.890119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.890235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.890262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.890384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.890412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.890493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.890520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.890634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.890661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 
00:54:09.955 [2024-12-09 05:49:03.890753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.890781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.890898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.890924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.891043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.891074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.891161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.891188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.891289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.891330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.891423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.891451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.955 [2024-12-09 05:49:03.891549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.955 [2024-12-09 05:49:03.891576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.955 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.891714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.891741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.891820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.891847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.891941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.891970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 
00:54:09.956 [2024-12-09 05:49:03.892062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.892091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.892178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.892204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.892299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.892327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.892469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.892496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.892582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.892608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.892691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.892716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.892853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.892922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.893031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.893060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.893170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.893196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.893295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.893322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 
00:54:09.956 [2024-12-09 05:49:03.893411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.893438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.893546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.893573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.893656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.893681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.893767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.893796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.893878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.893907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.894018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.894044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.894134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.894161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.894251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.894285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.894399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.894426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.894516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.894544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 
00:54:09.956 [2024-12-09 05:49:03.894626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.894652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.894798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.894826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.894918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.894944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.895061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.895091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.895177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.895204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.895313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.895356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.895457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.895483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.895568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.895594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.895705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.956 [2024-12-09 05:49:03.895731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.956 qpair failed and we were unable to recover it. 00:54:09.956 [2024-12-09 05:49:03.895850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.895878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 
00:54:09.957 [2024-12-09 05:49:03.895957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.895991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.896107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.896134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.896269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.896301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.896413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.896440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.896524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.896550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.896665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.896691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.896800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.896865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.896984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.897013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.897096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.897124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.897237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.897264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 
00:54:09.957 [2024-12-09 05:49:03.897382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.897409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.897526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.897553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.897666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.897698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.897812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.897840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.897967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.897993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.898077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.898106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.898220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.898246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.898345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.898372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.898463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.898488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.898602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.898629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 
00:54:09.957 [2024-12-09 05:49:03.898711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.898736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.898819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.898847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.898958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.898985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.899111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.899151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.899278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.899307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.899398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.899428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.899523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.899551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.899632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.899660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.899755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.899783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.900001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.900028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 
00:54:09.957 [2024-12-09 05:49:03.900141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.900168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.900304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.900331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.900418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.900445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.900563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.900590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.900684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.900710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.900795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.900823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.900951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.957 [2024-12-09 05:49:03.900992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.957 qpair failed and we were unable to recover it. 00:54:09.957 [2024-12-09 05:49:03.901082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.901110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.901215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.901242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.901366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.901399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 
00:54:09.958 [2024-12-09 05:49:03.901491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.901517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.901601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.901628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.901767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.901793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.901913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.901939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.902023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.902049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.902194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.902223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.902357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.902398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.902516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.902544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.902683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.902710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.902824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.902852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 
00:54:09.958 [2024-12-09 05:49:03.902964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.902990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.903108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.903137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.903219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.903247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.903353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.903381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.903495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.903521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.903664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.903691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.903810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.903836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.903974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.904000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.904081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.904108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.904185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.904210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 
00:54:09.958 [2024-12-09 05:49:03.904304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.904331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.904418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.904445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.904523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.904548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.904665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.904691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.904764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.904788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.904894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.904920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.905005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.905031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.905239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.905287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.905382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.905412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.905499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.905528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 
00:54:09.958 [2024-12-09 05:49:03.905621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.905649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.905756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.905783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.905903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.905932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.906051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.906080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.906173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.958 [2024-12-09 05:49:03.906199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.958 qpair failed and we were unable to recover it. 00:54:09.958 [2024-12-09 05:49:03.906282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.906307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.906398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.906424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.906538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.906564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.906677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.906703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.906815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.906846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 
00:54:09.959 [2024-12-09 05:49:03.906937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.906964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.907160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.907188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.907306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.907336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.907421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.907449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.907569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.907596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.907675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.907702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.907784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.907811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.907932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.907959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.908067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.908094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.908188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.908215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 
00:54:09.959 [2024-12-09 05:49:03.908331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.908360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.908474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.908500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.908607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.908633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.908749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.908775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.908930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.908970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.909115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.909144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.909260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.909299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.909382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.909409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.909547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.909573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.909655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.909681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 
00:54:09.959 [2024-12-09 05:49:03.909754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.909778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.909907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.909947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.910043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.910072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.910192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.910220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.910340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.910367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.910505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.910531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.910645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.910673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.910790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.910817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.910913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.910939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.911048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.911075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 
00:54:09.959 [2024-12-09 05:49:03.911187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.911216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.911346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.911387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.911508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.911536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.959 qpair failed and we were unable to recover it. 00:54:09.959 [2024-12-09 05:49:03.911643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.959 [2024-12-09 05:49:03.911670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.911808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.911835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.911947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.911973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.912113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.912138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.912245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.912280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.912401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.912428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.912518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.912550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 
00:54:09.960 [2024-12-09 05:49:03.912667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.912695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.912807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.912835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.912954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.912980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.913100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.913127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.913287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.913328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.913417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.913446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.913532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.913561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.913672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.913699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.913816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.913843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.913951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.913976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 
00:54:09.960 [2024-12-09 05:49:03.914062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.914089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.914198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.914224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.914315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.914344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.914431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.914457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.914548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.914575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.914690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.914718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.914806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.914832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.914924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.914963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.915114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.915142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.915262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.915301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 
00:54:09.960 [2024-12-09 05:49:03.915498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.915525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.915620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.915647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.915755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.915783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.915896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.915924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.916072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.916098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.916180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.916205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.916306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.960 [2024-12-09 05:49:03.916340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.960 qpair failed and we were unable to recover it. 00:54:09.960 [2024-12-09 05:49:03.916460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.916486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.916598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.916625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.916717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.916746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 
00:54:09.961 [2024-12-09 05:49:03.916888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.916915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.917031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.917060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.917175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.917201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.917282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.917307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.917419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.917445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.917522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.917547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.917684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.917710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.917824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.917849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.917992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.918021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.918163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.918189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 
00:54:09.961 [2024-12-09 05:49:03.918316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.918343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.918462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.918488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.918596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.918623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.918738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.918764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.918840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.918866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.918972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.918998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.919133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.919159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.919250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.919283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.919379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.919405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.919477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.919502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 
00:54:09.961 [2024-12-09 05:49:03.919613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.919638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.919719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.919745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.919830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.919856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.919966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.919998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.920143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.920169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.920288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.920314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.920397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.920423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.920508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.920533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.920616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.920642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.920781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.920808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 
00:54:09.961 [2024-12-09 05:49:03.920919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.920945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.921021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.921047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.921160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.921186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.921303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.921329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.921407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.921433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.961 qpair failed and we were unable to recover it. 00:54:09.961 [2024-12-09 05:49:03.921544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.961 [2024-12-09 05:49:03.921571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.921689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.921715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.921801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.921827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.921905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.921932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.922042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.922068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 
00:54:09.962 [2024-12-09 05:49:03.922191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.922232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.922331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.922360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.922479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.922505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.922621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.922648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.922753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.922778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.922861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.922886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.922994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.923020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.923132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.923159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.923298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.923338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.923439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.923467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 
00:54:09.962 [2024-12-09 05:49:03.923556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.923588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.923727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.923754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.923846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.923873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.923962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.923988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.924072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.924100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.924239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.924265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.924392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.924418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.924502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.924528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.924609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.924635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.924755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.924780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 
00:54:09.962 [2024-12-09 05:49:03.924943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.924995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.925106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.925133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.925277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.925303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.925419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.925446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.925591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.925619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.925735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.925761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.925856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.925882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.925966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.925993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.926137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.926163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.926294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.926335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 
00:54:09.962 [2024-12-09 05:49:03.926475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.926514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.926639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.926667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.926832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.926882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.962 qpair failed and we were unable to recover it. 00:54:09.962 [2024-12-09 05:49:03.927040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.962 [2024-12-09 05:49:03.927107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.927200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.927228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.927349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.927378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.927486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.927512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.927666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.927707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.927902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.927965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.928121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.928176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 
00:54:09.963 [2024-12-09 05:49:03.928295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.928323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.928401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.928428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.928544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.928572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.928681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.928709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.928815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.928842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.928960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.928987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.929102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.929130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.929239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.929266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.929352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.929379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.929464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.929491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 
00:54:09.963 [2024-12-09 05:49:03.929628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.929658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.929794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.929820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.929988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.930039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.930135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.930174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.930304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.930332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.930441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.930469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.930582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.930608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.930694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.930720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.930808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.930835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.930944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.930970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 
00:54:09.963 [2024-12-09 05:49:03.931062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.931101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.931220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.931249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.931373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.931400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.931516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.931544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.931646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.931673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.931779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.931806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.931918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.931947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.932035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.932061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.932173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.932200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.932314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.932341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 
00:54:09.963 [2024-12-09 05:49:03.932459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.963 [2024-12-09 05:49:03.932485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.963 qpair failed and we were unable to recover it. 00:54:09.963 [2024-12-09 05:49:03.932570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.932598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.932748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.932777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.932887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.932913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.933021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.933048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.933192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.933220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.933354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.933395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.933484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.933513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.933625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.933651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.933757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.933783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 
00:54:09.964 [2024-12-09 05:49:03.933902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.933955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.934100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.934128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.934216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.934245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.934386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.934426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.934521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.934549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.934702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.934752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.934980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.935006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.935132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.935161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.935292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.935319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.935460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.935487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 
00:54:09.964 [2024-12-09 05:49:03.935574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.935600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.935714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.935751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.935832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.935857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.935972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.935999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.936116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.936143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.936233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.936260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.936369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.936398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.936509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.936536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.936653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.936680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.936813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.936840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 
00:54:09.964 [2024-12-09 05:49:03.936962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.936988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.937068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.937094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.937204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.937229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.937328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.937354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.937463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.937503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.937593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.937623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.937732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.937759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.937852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.937878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.937961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.937987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.964 qpair failed and we were unable to recover it. 00:54:09.964 [2024-12-09 05:49:03.938081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.964 [2024-12-09 05:49:03.938120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 
00:54:09.965 [2024-12-09 05:49:03.938218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.938247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.938349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.938380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.938464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.938492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.938650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.938708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.938879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.938930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.939111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.939165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.939252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.939295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.939416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.939448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.939537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.939563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.939700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.939725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 
00:54:09.965 [2024-12-09 05:49:03.939815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.939842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.939973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.940026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.940132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.940172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.940292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.940322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.940409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.940435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.940555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.940582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.940670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.940696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.940835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.940862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.941035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.941090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.941213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.941255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 
00:54:09.965 [2024-12-09 05:49:03.941385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.941413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.941533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.941563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.941699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.941749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.941901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.941955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.942172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.942223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.942347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.942373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.942483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.942509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.942626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.942652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.942756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.942782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.942862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.942889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 
00:54:09.965 [2024-12-09 05:49:03.942972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.942999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.943118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.943147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.965 [2024-12-09 05:49:03.943240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.965 [2024-12-09 05:49:03.943289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.965 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.943411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.943440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.943556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.943590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.943785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.943814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.943953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.943980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.944093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.944120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.944230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.944257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.944354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.944382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 
00:54:09.966 [2024-12-09 05:49:03.944496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.944522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.944614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.944639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.944751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.944777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.944944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.944995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.945145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.945171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.945290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.945317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.945429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.945455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.945536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.945562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.945681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.945708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.945819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.945845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 
00:54:09.966 [2024-12-09 05:49:03.945958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.945984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.946066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.946092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.946178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.946204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.946283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.946309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.946444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe24f30 is same with the state(6) to be set 00:54:09.966 [2024-12-09 05:49:03.946589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.946618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.946705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.946732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.946847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.946874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.946961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.946988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.947113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.947154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 
00:54:09.966 [2024-12-09 05:49:03.947313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.947353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.947473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.947502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.947719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.947772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.947876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.947943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.948079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.948125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.948239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.948265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.948361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.948388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.948471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.948496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.948601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.948628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.948703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.948729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 
00:54:09.966 [2024-12-09 05:49:03.948827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.948855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.966 qpair failed and we were unable to recover it. 00:54:09.966 [2024-12-09 05:49:03.948943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.966 [2024-12-09 05:49:03.948969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.949092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.949120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.949212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.949239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.949342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.949370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.949457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.949488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.949629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.949656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.949741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.949767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.949861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.949887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.949979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.950007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 
00:54:09.967 [2024-12-09 05:49:03.950117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.950143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.950286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.950313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.950398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.950424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.950542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.950568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.950648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.950675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.950766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.950792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.950917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.950943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.951062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.951102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.951228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.951256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.951399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.951441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 
00:54:09.967 [2024-12-09 05:49:03.951562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.951590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.951703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.951729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.951815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.951841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.951968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.951994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.952082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.952109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.952225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.952253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.952341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.952366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.952446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.952473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.952595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.952623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.952731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.952757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 
00:54:09.967 [2024-12-09 05:49:03.952844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.952870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.952986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.953012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.953090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.953122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.953239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.953266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.953362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.953388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.953507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.953534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.953672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.953697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.953778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.953805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.953918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.953943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.967 qpair failed and we were unable to recover it. 00:54:09.967 [2024-12-09 05:49:03.954056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.967 [2024-12-09 05:49:03.954083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 
00:54:09.968 [2024-12-09 05:49:03.954193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.954219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.954325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.954351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.954466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.954491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.954575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.954601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.954739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.954765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.954878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.954905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.955016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.955043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.955188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.955214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.955331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.955372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.955467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.955494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 
00:54:09.968 [2024-12-09 05:49:03.955623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.955663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.955811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.955840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.956067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.956129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.956252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.956285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.956432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.956460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.956575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.956600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.956712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.956739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.956838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.956865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.957009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.957035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.957153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.957180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 
00:54:09.968 [2024-12-09 05:49:03.957287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.957313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.957400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.957426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.957505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.957531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.957627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.957654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.957735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.957762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.957858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.957884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.958023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.958049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.958189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.958215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.958334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.958360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.958447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.958475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 
00:54:09.968 [2024-12-09 05:49:03.958563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.958588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.958699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.958726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.958826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.958856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.959001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.959028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.959139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.959165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.959277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.959319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.959433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.959474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.968 [2024-12-09 05:49:03.959571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.968 [2024-12-09 05:49:03.959599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.968 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.959707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.959734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.959850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.959903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 
00:54:09.969 [2024-12-09 05:49:03.960005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.960068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.960183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.960210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.960308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.960351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.960454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.960494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.960645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.960672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.960756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.960782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.960900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.960927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.961042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.961068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.961154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.961180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.961334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.961376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 
00:54:09.969 [2024-12-09 05:49:03.961470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.961499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.961583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.961611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.961728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.961755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.961860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.961886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.962001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.962030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.962174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.962202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.962331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.962373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.962463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.962492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.962599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.962626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.962723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.962751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 
00:54:09.969 [2024-12-09 05:49:03.962840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.962868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.962982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.963008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.963123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.963149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.963264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.963296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.963419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.963445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.963560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.963586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.963670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.963696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.963806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.963832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.963955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.963995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.964125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.964167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 
00:54:09.969 [2024-12-09 05:49:03.964257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.964293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.969 qpair failed and we were unable to recover it. 00:54:09.969 [2024-12-09 05:49:03.964417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.969 [2024-12-09 05:49:03.964444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.964533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.964565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.964655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.964682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.964831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.964885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.965005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.965036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.965180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.965220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.965367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.965397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.965510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.965537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.965752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.965802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 
00:54:09.970 [2024-12-09 05:49:03.965941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.965994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.966102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.966128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.966234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.966260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.966389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.966417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.966539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.966566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.966691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.966718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.966829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.966855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.966945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.966972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.967080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.967106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.967223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.967249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 
00:54:09.970 [2024-12-09 05:49:03.967380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.967421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.967550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.967590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.967732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.967762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.967853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.967880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.967965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.967992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.968078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.968104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.968190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.968218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.968368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.968395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.968525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.968551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.968640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.968671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 
00:54:09.970 [2024-12-09 05:49:03.968793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.968827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.968942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.968968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.969108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.969134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.969249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.969283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.969426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.969453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.969543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.969570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.969674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.969700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.969784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.969811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.969920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.969947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.970 qpair failed and we were unable to recover it. 00:54:09.970 [2024-12-09 05:49:03.970071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.970 [2024-12-09 05:49:03.970097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 
00:54:09.971 [2024-12-09 05:49:03.970219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.970245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.970381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.970421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.970528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.970569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.970703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.970754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.970877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.970908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.971023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.971047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.971167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.971194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.971283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.971310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.971406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.971433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.971550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.971576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 
00:54:09.971 [2024-12-09 05:49:03.971666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.971696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.971778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.971805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.971899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.971925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.972060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.972089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.972208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.972235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.972335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.972361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.972445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.972475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.972557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.972584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.972666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.972694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.972933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.972986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 
00:54:09.971 [2024-12-09 05:49:03.973129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.973155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.973265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.973373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.973473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.973499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.973626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.973652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.973731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.973758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.973871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.973897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.974005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.974032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.974133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.974174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.974269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.974303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.974425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.974451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 
00:54:09.971 [2024-12-09 05:49:03.974562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.974588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.974700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.974725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.974808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.974834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.974950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.974977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.975077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.975119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.975267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.975302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.975385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.975413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.971 [2024-12-09 05:49:03.975552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.971 [2024-12-09 05:49:03.975579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.971 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.975696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.975722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.975864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.975890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 
00:54:09.972 [2024-12-09 05:49:03.976030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.976057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.976179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.976206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.976399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.976426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.976550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.976585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.976708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.976735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.976855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.976883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.977031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.977057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.977202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.977229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.977373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.977401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.977514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.977540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 
00:54:09.972 [2024-12-09 05:49:03.977656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.977683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.977766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.977794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.977915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.977942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.978054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.978081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.978171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.978197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.978318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.978346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.978463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.978490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.978585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.978612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.978724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.978751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.978840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.978867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 
00:54:09.972 [2024-12-09 05:49:03.978976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.979004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.979090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.979118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.979270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.979305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.979419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.979446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.979536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.979563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.979708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.979736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.979847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.979874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.979980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.980007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.980148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.980176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.980281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.980332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 
00:54:09.972 [2024-12-09 05:49:03.980456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.980486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.980564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.980590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.980704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.980730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.980837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.980864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.980959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.980987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.972 [2024-12-09 05:49:03.981107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.972 [2024-12-09 05:49:03.981136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.972 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.981220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.981247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.981373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.981401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.981522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.981549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.981664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.981690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 
00:54:09.973 [2024-12-09 05:49:03.981803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.981829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.981944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.981971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.982058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.982085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.982197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.982229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.982381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.982409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.982504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.982543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.982670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.982709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.982832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.982860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.982945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.982971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.983083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.983109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 
00:54:09.973 [2024-12-09 05:49:03.983184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.983210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.983329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.983356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.983476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.983502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.983599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.983638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.983791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.983821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.983960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.983987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.984189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.984217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.984337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.984365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.984558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.984584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.984702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.984728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 
00:54:09.973 [2024-12-09 05:49:03.984870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.984930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.985042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.985069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.985210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.985237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.985336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.985363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.985469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.985496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.985589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.985616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.985727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.985754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.985866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.985892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.986015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.986043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.986123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.986151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 
00:54:09.973 [2024-12-09 05:49:03.986293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.986337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.986456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.986483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.986601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.986628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.973 [2024-12-09 05:49:03.986765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.973 [2024-12-09 05:49:03.986792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.973 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.986877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.986905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.987033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.987064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.987174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.987201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.987303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.987355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.987448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.987476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.987603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.987630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 
00:54:09.974 [2024-12-09 05:49:03.987774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.987801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.987883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.987911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.988029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.988056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.988184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.988211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.988304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.988342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.988534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.988561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.988684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.988711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.988831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.988858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.988970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.988997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.989108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.989135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 
00:54:09.974 [2024-12-09 05:49:03.989270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.989327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.989454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.989492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.989638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.989665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.989781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.989808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.989898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.989924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.990040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.990067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.990142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.990168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.990270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.990328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.990444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.990471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.990603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.990634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 
00:54:09.974 [2024-12-09 05:49:03.990746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.990773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.990922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.990947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.991038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.991065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.991146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.991172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.991321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.991370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.991498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.991527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.991651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.974 [2024-12-09 05:49:03.991679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.974 qpair failed and we were unable to recover it. 00:54:09.974 [2024-12-09 05:49:03.991788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.991814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.991955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.991982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.992091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.992118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 
00:54:09.975 [2024-12-09 05:49:03.992213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.992246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.992353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.992384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.992496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.992523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.992637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.992664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.992745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.992773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.992887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.992914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.993008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.993037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.993178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.993206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.993335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.993363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.993450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.993477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 
00:54:09.975 [2024-12-09 05:49:03.993660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.993708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.993871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.993922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.994044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.994095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.994203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.994228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.994332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.994358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.994436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.994462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.994566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.994593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.994670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.994696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.994776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.994802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.994914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.994941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 
00:54:09.975 [2024-12-09 05:49:03.995044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.995070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.995189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.995218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.995320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.995350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.995495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.995522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.995636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.995663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.995752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.995779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.995894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.995921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.996036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.996068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.996175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.996201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.996341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.996381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 
00:54:09.975 [2024-12-09 05:49:03.996525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.996556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.996726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.996776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.996860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.996886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.997015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.997066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.997181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.975 [2024-12-09 05:49:03.997215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.975 qpair failed and we were unable to recover it. 00:54:09.975 [2024-12-09 05:49:03.997339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:03.997367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:03.997458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:03.997484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:03.997605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:03.997643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:03.997764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:03.997790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:03.997934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:03.997960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 
00:54:09.976 [2024-12-09 05:49:03.998049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:03.998076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:03.998194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:03.998222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:03.998319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:03.998348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:03.998437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:03.998465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:03.998561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:03.998588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:03.998701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:03.998728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:03.998863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:03.998902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:03.999028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:03.999058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:03.999199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:03.999227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:03.999344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:03.999371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 
00:54:09.976 [2024-12-09 05:49:03.999486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:03.999511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:03.999628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:03.999655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:03.999744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:03.999770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:03.999925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:03.999991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:04.000096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:04.000131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:04.000232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:04.000258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:04.000378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:04.000404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:04.000483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:04.000510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:04.000609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:04.000635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:04.000717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:04.000745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 
00:54:09.976 [2024-12-09 05:49:04.000857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:04.000887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:04.000987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:04.001028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:04.001140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:04.001167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:04.001253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:04.001286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:04.001425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:04.001452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:04.001549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:04.001575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:04.001685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:04.001712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:04.001922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:04.001988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:04.002074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:04.002100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:04.002189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:04.002218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 
00:54:09.976 [2024-12-09 05:49:04.002347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:04.002375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:04.002515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:04.002547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:04.002657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.976 [2024-12-09 05:49:04.002684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.976 qpair failed and we were unable to recover it. 00:54:09.976 [2024-12-09 05:49:04.002788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.002815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.002926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.002954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.003095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.003121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.003242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.003290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.003428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.003459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.003606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.003634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.003720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.003747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 
00:54:09.977 [2024-12-09 05:49:04.003860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.003887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.004004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.004030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.004148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.004175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.004284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.004311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.004429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.004455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.004563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.004589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.004730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.004757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.004872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.004899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.005017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.005045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.005150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.005189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 
00:54:09.977 [2024-12-09 05:49:04.005315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.005345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.005462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.005489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.005601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.005627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.005735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.005762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.005881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.005915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.006025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.006051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.006161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.006188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.006306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.006346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.006459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.006488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.006611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.006638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 
00:54:09.977 [2024-12-09 05:49:04.006727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.006753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.006888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.006915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.007007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.007032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.007150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.007178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.007278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.007306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.007401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.007427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.007505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.007543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.007689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.007716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.007806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.007834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.007955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.007983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 
00:54:09.977 [2024-12-09 05:49:04.008106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.977 [2024-12-09 05:49:04.008135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.977 qpair failed and we were unable to recover it. 00:54:09.977 [2024-12-09 05:49:04.008245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.008281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.008399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.008425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.008516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.008547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.008664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.008691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.008832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.008858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.008997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.009026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.009142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.009168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.009262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.009302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.009393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.009420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 
00:54:09.978 [2024-12-09 05:49:04.009530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.009557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.009633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.009661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.009801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.009828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.009946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.010004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.010112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.010138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.010281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.010318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.010425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.010452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.010562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.010588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.010724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.010750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.010867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.010894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 
00:54:09.978 [2024-12-09 05:49:04.011004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.011030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.011182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.011208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.011345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.011386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.011508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.011541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.011631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.011658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.011749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.011776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.011895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.011927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.012038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.012065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.012178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.012206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.012334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.012361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 
00:54:09.978 [2024-12-09 05:49:04.012483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.012509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.012628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.012654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.012765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.012791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.012911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.978 [2024-12-09 05:49:04.012938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.978 qpair failed and we were unable to recover it. 00:54:09.978 [2024-12-09 05:49:04.013028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.013056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.013160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.013192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.013319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.013347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.013457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.013484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.013607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.013634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.013709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.013735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 
00:54:09.979 [2024-12-09 05:49:04.013810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.013838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.013983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.014009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.014122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.014149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.014257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.014288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.014413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.014439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.014529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.014561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.014679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.014738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.014857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.014886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.015042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.015083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.015201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.015230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 
00:54:09.979 [2024-12-09 05:49:04.015348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.015375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.015488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.015515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.015607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.015633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.015717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.015745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.015824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.015850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.015936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.015963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.016083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.016119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.016296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.016337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.016454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.016483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.016627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.016654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 
00:54:09.979 [2024-12-09 05:49:04.016748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.016777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.016874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.016914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.017030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.017058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.017174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.017203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.017305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.017336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.979 qpair failed and we were unable to recover it. 00:54:09.979 [2024-12-09 05:49:04.017456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.979 [2024-12-09 05:49:04.017482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.017636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.017663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.017805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.017831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.017907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.017933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.018023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.018050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 
00:54:09.980 [2024-12-09 05:49:04.018144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.018170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.018255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.018288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.018385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.018412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.018553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.018594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.018692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.018721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.018855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.018895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.019038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.019066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.019182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.019209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.019337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.019370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.019482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.019509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 
00:54:09.980 [2024-12-09 05:49:04.019601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.019627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.019713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.019737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.019872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.019897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.020015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.020054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.020200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.020229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.020373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.020403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.020520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.020547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.020687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.020713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.020825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.020853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.021026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.021077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 
00:54:09.980 [2024-12-09 05:49:04.021194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.021220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.021370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.021397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.021526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.021554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.021636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.021662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.021757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.021783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.021928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.021954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.022107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.022147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.022270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.022318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.022437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.022466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.022582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.022620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 
00:54:09.980 [2024-12-09 05:49:04.022802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.022852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.980 [2024-12-09 05:49:04.023006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.980 [2024-12-09 05:49:04.023032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.980 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.023145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.023172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.023290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.023325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.023429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.023456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.023537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.023568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.023690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.023716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.023858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.023884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.024003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.024031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.024150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.024175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 
00:54:09.981 [2024-12-09 05:49:04.024294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.024326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.024404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.024430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.024552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.024578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.024687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.024713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.024797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.024823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.024943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.024969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.025109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.025135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.025242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.025268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.025374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.025413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.025515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.025557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 
00:54:09.981 [2024-12-09 05:49:04.025697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.025724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.025882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.025950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.026056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.026082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.026217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.026244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.026357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.026385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.026473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.026499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.026629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.026670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.026869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.026931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.027150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.027202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.027291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.027323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 
00:54:09.981 [2024-12-09 05:49:04.027434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.027460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.027581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.027608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.027700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.027728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.027948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.028004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.028118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.028144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.028230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.981 [2024-12-09 05:49:04.028256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.981 qpair failed and we were unable to recover it. 00:54:09.981 [2024-12-09 05:49:04.028379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.982 [2024-12-09 05:49:04.028407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.982 qpair failed and we were unable to recover it. 00:54:09.982 [2024-12-09 05:49:04.028497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.982 [2024-12-09 05:49:04.028524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.982 qpair failed and we were unable to recover it. 00:54:09.982 [2024-12-09 05:49:04.028617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.982 [2024-12-09 05:49:04.028644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.982 qpair failed and we were unable to recover it. 00:54:09.982 [2024-12-09 05:49:04.028812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.982 [2024-12-09 05:49:04.028866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.982 qpair failed and we were unable to recover it. 
00:54:09.982 [2024-12-09 05:49:04.028976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.982 [2024-12-09 05:49:04.029003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.982 qpair failed and we were unable to recover it. 00:54:09.982 [2024-12-09 05:49:04.029118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.982 [2024-12-09 05:49:04.029144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.982 qpair failed and we were unable to recover it. 00:54:09.982 [2024-12-09 05:49:04.029262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.982 [2024-12-09 05:49:04.029295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.982 qpair failed and we were unable to recover it. 00:54:09.982 [2024-12-09 05:49:04.029394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.982 [2024-12-09 05:49:04.029434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.982 qpair failed and we were unable to recover it. 00:54:09.982 [2024-12-09 05:49:04.029554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.982 [2024-12-09 05:49:04.029583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.982 qpair failed and we were unable to recover it. 00:54:09.982 [2024-12-09 05:49:04.029696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.982 [2024-12-09 05:49:04.029723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.982 qpair failed and we were unable to recover it. 00:54:09.982 [2024-12-09 05:49:04.029953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.982 [2024-12-09 05:49:04.030000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.982 qpair failed and we were unable to recover it. 00:54:09.982 [2024-12-09 05:49:04.030109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.982 [2024-12-09 05:49:04.030135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.982 qpair failed and we were unable to recover it. 00:54:09.982 [2024-12-09 05:49:04.030246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.982 [2024-12-09 05:49:04.030283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.982 qpair failed and we were unable to recover it. 00:54:09.982 [2024-12-09 05:49:04.030394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.982 [2024-12-09 05:49:04.030419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.982 qpair failed and we were unable to recover it. 
00:54:09.982 [2024-12-09 05:49:04.030510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.982 [2024-12-09 05:49:04.030536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.982 qpair failed and we were unable to recover it. 00:54:09.982 [2024-12-09 05:49:04.030644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.982 [2024-12-09 05:49:04.030670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.982 qpair failed and we were unable to recover it. 00:54:09.982 [2024-12-09 05:49:04.030794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.982 [2024-12-09 05:49:04.030819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.982 qpair failed and we were unable to recover it. 00:54:09.982 [2024-12-09 05:49:04.030905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.982 [2024-12-09 05:49:04.030932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.982 qpair failed and we were unable to recover it. 00:54:09.982 [2024-12-09 05:49:04.031072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.982 [2024-12-09 05:49:04.031098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.982 qpair failed and we were unable to recover it. 00:54:09.982 [2024-12-09 05:49:04.031221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.982 [2024-12-09 05:49:04.031249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.982 qpair failed and we were unable to recover it. 00:54:09.982 [2024-12-09 05:49:04.031376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.982 [2024-12-09 05:49:04.031402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.982 qpair failed and we were unable to recover it. 00:54:09.982 [2024-12-09 05:49:04.031514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.982 [2024-12-09 05:49:04.031543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.982 qpair failed and we were unable to recover it. 00:54:09.982 [2024-12-09 05:49:04.031624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.982 [2024-12-09 05:49:04.031650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.982 qpair failed and we were unable to recover it. 00:54:09.982 [2024-12-09 05:49:04.031770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.982 [2024-12-09 05:49:04.031797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.982 qpair failed and we were unable to recover it. 
00:54:09.982 [2024-12-09 05:49:04.031880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:09.982 [2024-12-09 05:49:04.031908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420
00:54:09.982 qpair failed and we were unable to recover it.
00:54:09.982-00:54:09.988 [repeated log output condensed: the same three-message sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=... with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) recurs continuously from 05:49:04.031880 through 05:49:04.061771, cycling over tqpair values 0x7faa14000b90, 0x7faa0c000b90, 0x7faa08000b90, and 0xe16fa0, all targeting addr=10.0.0.2, port=4420.]
00:54:09.988 [2024-12-09 05:49:04.061878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.988 [2024-12-09 05:49:04.061903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.988 qpair failed and we were unable to recover it. 00:54:09.988 [2024-12-09 05:49:04.062015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.988 [2024-12-09 05:49:04.062041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.988 qpair failed and we were unable to recover it. 00:54:09.988 [2024-12-09 05:49:04.062148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.988 [2024-12-09 05:49:04.062172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.988 qpair failed and we were unable to recover it. 00:54:09.988 [2024-12-09 05:49:04.062291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.988 [2024-12-09 05:49:04.062317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.988 qpair failed and we were unable to recover it. 00:54:09.988 [2024-12-09 05:49:04.062399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.988 [2024-12-09 05:49:04.062426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.988 qpair failed and we were unable to recover it. 00:54:09.988 [2024-12-09 05:49:04.062514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.988 [2024-12-09 05:49:04.062542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.988 qpair failed and we were unable to recover it. 00:54:09.988 [2024-12-09 05:49:04.062656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.988 [2024-12-09 05:49:04.062682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.988 qpair failed and we were unable to recover it. 00:54:09.988 [2024-12-09 05:49:04.062798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.988 [2024-12-09 05:49:04.062822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.988 qpair failed and we were unable to recover it. 00:54:09.988 [2024-12-09 05:49:04.062925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.988 [2024-12-09 05:49:04.062950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.988 qpair failed and we were unable to recover it. 00:54:09.988 [2024-12-09 05:49:04.063030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.988 [2024-12-09 05:49:04.063054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.988 qpair failed and we were unable to recover it. 
00:54:09.989 [2024-12-09 05:49:04.063170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.063195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.063284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.063320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.063437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.063463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.063604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.063632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.063837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.063890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.064005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.064031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.064151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.064179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.064291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.064324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.064420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.064448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.064582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.064652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 
00:54:09.989 [2024-12-09 05:49:04.064810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.064860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.064976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.065002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.065144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.065171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.065268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.065314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.065403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.065429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.065547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.065574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.065703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.065731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.065911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.065972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.066053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.066080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.066162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.066189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 
00:54:09.989 [2024-12-09 05:49:04.066309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.066338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.066435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.066461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.066576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.066601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.066848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.066911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.067154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.067219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.067409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.067442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.067552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.067577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.067666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.067694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.067807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.067838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.067982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.068009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 
00:54:09.989 [2024-12-09 05:49:04.068126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.068153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.068238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.068264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.068386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.068412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.068522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.068547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.068632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.068663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.068816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.068843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.068926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.068952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.989 qpair failed and we were unable to recover it. 00:54:09.989 [2024-12-09 05:49:04.069032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.989 [2024-12-09 05:49:04.069056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.069135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.069160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.069296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.069334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 
00:54:09.990 [2024-12-09 05:49:04.069480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.069507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.069631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.069657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.069771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.069797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.069918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.069945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.070039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.070068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.070183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.070209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.070300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.070331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.070418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.070444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.070532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.070557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.070671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.070696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 
00:54:09.990 [2024-12-09 05:49:04.070782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.070807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.070918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.070943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.071068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.071095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.071185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.071212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.071301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.071328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.071436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.071462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.071569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.071594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.071678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.071703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.071820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.071846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.071927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.071952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 
00:54:09.990 [2024-12-09 05:49:04.072091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.072116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.072276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.072305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.072420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.072447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.072559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.072592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.072735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.072767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.072882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.072909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.073051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.073077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.073196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.073220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.073307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.073331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.073446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.073471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 
00:54:09.990 [2024-12-09 05:49:04.073564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.073588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.073701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.073725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.073868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.073930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.074087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.074113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.074228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.074268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.990 qpair failed and we were unable to recover it. 00:54:09.990 [2024-12-09 05:49:04.074473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.990 [2024-12-09 05:49:04.074501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.074621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.074648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.074729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.074761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.074844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.074870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.074987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.075016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 
00:54:09.991 [2024-12-09 05:49:04.075133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.075160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.075279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.075304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.075386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.075411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.075503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.075527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.075666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.075691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.075876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.075943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.076109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.076135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.076213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.076242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.076372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.076412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.076614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.076642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 
00:54:09.991 [2024-12-09 05:49:04.076735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.076762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.076851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.076878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.076994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.077019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.077105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.077131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.077224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.077250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.077373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.077399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.077509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.077534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.077707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.077758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.078006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.078076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.078168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.078192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 
00:54:09.991 [2024-12-09 05:49:04.078352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.078392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.078504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.078542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.078662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.078690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.078774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.078799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.078879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.078904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.078990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.079017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.079105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.079132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.079224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.079253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.079377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.079403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.079522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.079548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 
00:54:09.991 [2024-12-09 05:49:04.079631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.079656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.079756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.079784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.079864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.991 [2024-12-09 05:49:04.079892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.991 qpair failed and we were unable to recover it. 00:54:09.991 [2024-12-09 05:49:04.080002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.992 [2024-12-09 05:49:04.080030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.992 qpair failed and we were unable to recover it. 00:54:09.992 [2024-12-09 05:49:04.080112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.992 [2024-12-09 05:49:04.080141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.992 qpair failed and we were unable to recover it. 00:54:09.992 [2024-12-09 05:49:04.080269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.992 [2024-12-09 05:49:04.080303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.992 qpair failed and we were unable to recover it. 00:54:09.992 [2024-12-09 05:49:04.080450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.992 [2024-12-09 05:49:04.080477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.992 qpair failed and we were unable to recover it. 00:54:09.992 [2024-12-09 05:49:04.080617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.992 [2024-12-09 05:49:04.080644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.992 qpair failed and we were unable to recover it. 00:54:09.992 [2024-12-09 05:49:04.080841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.992 [2024-12-09 05:49:04.080868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.992 qpair failed and we were unable to recover it. 00:54:09.992 [2024-12-09 05:49:04.081092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.992 [2024-12-09 05:49:04.081146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.992 qpair failed and we were unable to recover it. 
00:54:09.992 [2024-12-09 05:49:04.081229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.992 [2024-12-09 05:49:04.081255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.992 qpair failed and we were unable to recover it. 00:54:09.992 [2024-12-09 05:49:04.081346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.992 [2024-12-09 05:49:04.081371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.992 qpair failed and we were unable to recover it. 00:54:09.992 [2024-12-09 05:49:04.081486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.992 [2024-12-09 05:49:04.081513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.992 qpair failed and we were unable to recover it. 00:54:09.992 [2024-12-09 05:49:04.081671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.992 [2024-12-09 05:49:04.081724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.992 qpair failed and we were unable to recover it. 00:54:09.992 [2024-12-09 05:49:04.081886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.992 [2024-12-09 05:49:04.081957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.992 qpair failed and we were unable to recover it. 00:54:09.992 [2024-12-09 05:49:04.082099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.992 [2024-12-09 05:49:04.082127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.992 qpair failed and we were unable to recover it. 00:54:09.992 [2024-12-09 05:49:04.082210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.992 [2024-12-09 05:49:04.082235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.992 qpair failed and we were unable to recover it. 00:54:09.992 [2024-12-09 05:49:04.082344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.992 [2024-12-09 05:49:04.082382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.992 qpair failed and we were unable to recover it. 00:54:09.992 [2024-12-09 05:49:04.082486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.992 [2024-12-09 05:49:04.082513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.992 qpair failed and we were unable to recover it. 00:54:09.992 [2024-12-09 05:49:04.082720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.992 [2024-12-09 05:49:04.082782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:09.992 qpair failed and we were unable to recover it. 
00:54:09.997 [2024-12-09 05:49:04.115220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.997 [2024-12-09 05:49:04.115247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.997 qpair failed and we were unable to recover it. 00:54:09.997 [2024-12-09 05:49:04.115341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.997 [2024-12-09 05:49:04.115368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.997 qpair failed and we were unable to recover it. 00:54:09.997 [2024-12-09 05:49:04.115484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.997 [2024-12-09 05:49:04.115510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.997 qpair failed and we were unable to recover it. 00:54:09.997 [2024-12-09 05:49:04.115590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.997 [2024-12-09 05:49:04.115615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.997 qpair failed and we were unable to recover it. 00:54:09.997 [2024-12-09 05:49:04.115744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.997 [2024-12-09 05:49:04.115806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.997 qpair failed and we were unable to recover it. 00:54:09.997 [2024-12-09 05:49:04.115962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.997 [2024-12-09 05:49:04.116037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.997 qpair failed and we were unable to recover it. 00:54:09.997 [2024-12-09 05:49:04.116209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.997 [2024-12-09 05:49:04.116235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.997 qpair failed and we were unable to recover it. 00:54:09.997 [2024-12-09 05:49:04.116385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.997 [2024-12-09 05:49:04.116429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.997 qpair failed and we were unable to recover it. 00:54:09.997 [2024-12-09 05:49:04.116521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.997 [2024-12-09 05:49:04.116547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.997 qpair failed and we were unable to recover it. 00:54:09.997 [2024-12-09 05:49:04.116654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.997 [2024-12-09 05:49:04.116691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.997 qpair failed and we were unable to recover it. 
00:54:09.997 [2024-12-09 05:49:04.116767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.116793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 00:54:09.998 [2024-12-09 05:49:04.116905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.116932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 00:54:09.998 [2024-12-09 05:49:04.117079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.117108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 00:54:09.998 [2024-12-09 05:49:04.117229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.117258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 00:54:09.998 [2024-12-09 05:49:04.117416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.117442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 00:54:09.998 [2024-12-09 05:49:04.117529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.117555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 00:54:09.998 [2024-12-09 05:49:04.117854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.117918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 00:54:09.998 [2024-12-09 05:49:04.118156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.118182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 00:54:09.998 [2024-12-09 05:49:04.118269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.118301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 00:54:09.998 [2024-12-09 05:49:04.118386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.118412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 
00:54:09.998 [2024-12-09 05:49:04.118528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.118552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 00:54:09.998 [2024-12-09 05:49:04.118667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.118692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 00:54:09.998 [2024-12-09 05:49:04.118863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.118930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 00:54:09.998 [2024-12-09 05:49:04.119094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.119165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 00:54:09.998 [2024-12-09 05:49:04.119295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.119322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 00:54:09.998 [2024-12-09 05:49:04.119414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.119447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 00:54:09.998 [2024-12-09 05:49:04.119567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.119593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 00:54:09.998 [2024-12-09 05:49:04.119712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.119738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 00:54:09.998 [2024-12-09 05:49:04.119879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.119956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 00:54:09.998 [2024-12-09 05:49:04.120100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.120128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 
00:54:09.998 [2024-12-09 05:49:04.120248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.120279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 00:54:09.998 [2024-12-09 05:49:04.120368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.120394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 00:54:09.998 [2024-12-09 05:49:04.120539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.120565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 00:54:09.998 [2024-12-09 05:49:04.120672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.120697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 00:54:09.998 [2024-12-09 05:49:04.120842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.120870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 00:54:09.998 [2024-12-09 05:49:04.120985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.121012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 00:54:09.998 [2024-12-09 05:49:04.121121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.121147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 00:54:09.998 [2024-12-09 05:49:04.121243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.998 [2024-12-09 05:49:04.121270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.998 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.121367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.121393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.121515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.121541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 
00:54:09.999 [2024-12-09 05:49:04.121746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.121821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.121985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.122011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.122127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.122154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.122244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.122280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.122371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.122396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.122505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.122531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.122610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.122636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.122724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.122751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.122864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.122891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.123002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.123030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 
00:54:09.999 [2024-12-09 05:49:04.123138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.123165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.123283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.123315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.123459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.123488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.123632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.123659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.123767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.123793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.123875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.123900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.124014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.124039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.124127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.124153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.124241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.124266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.124362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.124387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 
00:54:09.999 [2024-12-09 05:49:04.124580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.124629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.124888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.124952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.125112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.125138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.125282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.125309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.125419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.125443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.125555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.125586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.125732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.125799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.126112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.126138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.126290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.126317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.126406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.126480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 
00:54:09.999 [2024-12-09 05:49:04.126812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.126877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.127073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.127138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.127377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.127404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.127611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.127675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.127874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.127941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:09.999 [2024-12-09 05:49:04.128196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:09.999 [2024-12-09 05:49:04.128261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:09.999 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.128508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.128556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.128718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.128767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.128851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.128878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.128961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.128992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 
00:54:10.000 [2024-12-09 05:49:04.129109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.129137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.129249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.129283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.129371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.129398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.129538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.129570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.129691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.129718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.129809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.129836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.129941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.129966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.130108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.130134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.130276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.130328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.130548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.130613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 
00:54:10.000 [2024-12-09 05:49:04.130852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.130918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.131128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.131154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.131245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.131276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.131392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.131417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.131591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.131655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.131911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.131976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.132269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.132301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.132530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.132597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.132767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.132817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.132900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.132926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 
00:54:10.000 [2024-12-09 05:49:04.133039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.133065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.133154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.133180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.133300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.133327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.133416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.133441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.133519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.133545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.133625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.133660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.133770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.133798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.133912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.133939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.134133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.134161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.134304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.134337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 
00:54:10.000 [2024-12-09 05:49:04.134479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.134506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.134594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.134619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.134739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.134766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.000 qpair failed and we were unable to recover it. 00:54:10.000 [2024-12-09 05:49:04.134882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.000 [2024-12-09 05:49:04.134907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.001 qpair failed and we were unable to recover it. 00:54:10.001 [2024-12-09 05:49:04.135051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.001 [2024-12-09 05:49:04.135077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.001 qpair failed and we were unable to recover it. 00:54:10.001 [2024-12-09 05:49:04.135192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.001 [2024-12-09 05:49:04.135218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.001 qpair failed and we were unable to recover it. 00:54:10.001 [2024-12-09 05:49:04.135374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.001 [2024-12-09 05:49:04.135441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.001 qpair failed and we were unable to recover it. 00:54:10.001 [2024-12-09 05:49:04.135691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.001 [2024-12-09 05:49:04.135756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.001 qpair failed and we were unable to recover it. 00:54:10.001 [2024-12-09 05:49:04.136012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.001 [2024-12-09 05:49:04.136077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.001 qpair failed and we were unable to recover it. 00:54:10.001 [2024-12-09 05:49:04.136212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.001 [2024-12-09 05:49:04.136238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.001 qpair failed and we were unable to recover it. 
00:54:10.001 [2024-12-09 05:49:04.136319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.001 [2024-12-09 05:49:04.136391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.001 qpair failed and we were unable to recover it. 00:54:10.001 [2024-12-09 05:49:04.136705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.001 [2024-12-09 05:49:04.136769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.001 qpair failed and we were unable to recover it. 00:54:10.001 [2024-12-09 05:49:04.137006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.001 [2024-12-09 05:49:04.137067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.001 qpair failed and we were unable to recover it. 00:54:10.001 [2024-12-09 05:49:04.137178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.001 [2024-12-09 05:49:04.137203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.001 qpair failed and we were unable to recover it. 00:54:10.001 [2024-12-09 05:49:04.137306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.001 [2024-12-09 05:49:04.137332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.001 qpair failed and we were unable to recover it. 00:54:10.001 [2024-12-09 05:49:04.137416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.001 [2024-12-09 05:49:04.137486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.001 qpair failed and we were unable to recover it. 00:54:10.001 [2024-12-09 05:49:04.137729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.001 [2024-12-09 05:49:04.137794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.001 qpair failed and we were unable to recover it. 00:54:10.001 [2024-12-09 05:49:04.138026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.001 [2024-12-09 05:49:04.138091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.001 qpair failed and we were unable to recover it. 00:54:10.001 [2024-12-09 05:49:04.138268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.001 [2024-12-09 05:49:04.138306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.001 qpair failed and we were unable to recover it. 00:54:10.001 [2024-12-09 05:49:04.138417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.001 [2024-12-09 05:49:04.138444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.001 qpair failed and we were unable to recover it. 
00:54:10.001 [2024-12-09 05:49:04.138639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:10.001 [2024-12-09 05:49:04.138704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420
00:54:10.001 qpair failed and we were unable to recover it.
00:54:10.001 [2024-12-09 05:49:04.143645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:10.001 [2024-12-09 05:49:04.143709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420
00:54:10.002 qpair failed and we were unable to recover it.
00:54:10.291 [2024-12-09 05:49:04.190493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:10.291 [2024-12-09 05:49:04.190536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420
00:54:10.291 qpair failed and we were unable to recover it.
00:54:10.291 [2024-12-09 05:49:04.190776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.291 [2024-12-09 05:49:04.190840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.291 qpair failed and we were unable to recover it. 00:54:10.291 [2024-12-09 05:49:04.191135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.291 [2024-12-09 05:49:04.191200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.291 qpair failed and we were unable to recover it. 00:54:10.291 [2024-12-09 05:49:04.191527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.291 [2024-12-09 05:49:04.191593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.291 qpair failed and we were unable to recover it. 00:54:10.291 [2024-12-09 05:49:04.191871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.291 [2024-12-09 05:49:04.191934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.291 qpair failed and we were unable to recover it. 00:54:10.291 [2024-12-09 05:49:04.192124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.291 [2024-12-09 05:49:04.192187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.291 qpair failed and we were unable to recover it. 00:54:10.291 [2024-12-09 05:49:04.192475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.192551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 00:54:10.292 [2024-12-09 05:49:04.192845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.192908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 00:54:10.292 [2024-12-09 05:49:04.193186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.193249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 00:54:10.292 [2024-12-09 05:49:04.193470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.193538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 00:54:10.292 [2024-12-09 05:49:04.193795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.193863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 
00:54:10.292 [2024-12-09 05:49:04.194155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.194220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 00:54:10.292 [2024-12-09 05:49:04.194408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.194441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 00:54:10.292 [2024-12-09 05:49:04.194568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.194600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 00:54:10.292 [2024-12-09 05:49:04.194731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.194763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 00:54:10.292 [2024-12-09 05:49:04.194934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.194967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 00:54:10.292 [2024-12-09 05:49:04.195100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.195155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 00:54:10.292 [2024-12-09 05:49:04.195402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.195435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 00:54:10.292 [2024-12-09 05:49:04.195600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.195633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 00:54:10.292 [2024-12-09 05:49:04.195804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.195868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 00:54:10.292 [2024-12-09 05:49:04.196116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.196180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 
00:54:10.292 [2024-12-09 05:49:04.196434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.196497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 00:54:10.292 [2024-12-09 05:49:04.196700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.196767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 00:54:10.292 [2024-12-09 05:49:04.197029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.197094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 00:54:10.292 [2024-12-09 05:49:04.197394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.197459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 00:54:10.292 [2024-12-09 05:49:04.197700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.197764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 00:54:10.292 [2024-12-09 05:49:04.198014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.198077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 00:54:10.292 [2024-12-09 05:49:04.198342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.198410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 00:54:10.292 [2024-12-09 05:49:04.198664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.198728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 00:54:10.292 [2024-12-09 05:49:04.198978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.199054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 00:54:10.292 [2024-12-09 05:49:04.199326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.199392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 
00:54:10.292 [2024-12-09 05:49:04.199600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.199665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 00:54:10.292 [2024-12-09 05:49:04.199957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.200022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 00:54:10.292 [2024-12-09 05:49:04.200221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.200318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 00:54:10.292 [2024-12-09 05:49:04.200461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.200495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.292 qpair failed and we were unable to recover it. 00:54:10.292 [2024-12-09 05:49:04.200637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.292 [2024-12-09 05:49:04.200671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.200843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.200877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.201019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.201052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.201184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.201218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.201360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.201395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.201494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.201535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 
00:54:10.293 [2024-12-09 05:49:04.201650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.201682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.201828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.201873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.202017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.202049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.202159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.202192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.202320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.202352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.202515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.202549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.202658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.202689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.202867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.202901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.203043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.203076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.203266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.203329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 
00:54:10.293 [2024-12-09 05:49:04.203463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.203496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.203655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.203689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.203857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.203920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.204222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.204300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.204468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.204501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.204622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.204655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.204805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.204839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.204970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.205004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.205136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.205169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.205281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.205315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 
00:54:10.293 [2024-12-09 05:49:04.205443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.205476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.205627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.205661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.205827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.205860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.206026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.206060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.206203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.206236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.206387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.206420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.206547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.206606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.206796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.206867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.207147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.293 [2024-12-09 05:49:04.207206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.293 qpair failed and we were unable to recover it. 00:54:10.293 [2024-12-09 05:49:04.207435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.207469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 
00:54:10.294 [2024-12-09 05:49:04.207617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.207651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.207766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.207800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.207968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.208009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.208153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.208194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.208290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.208322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.208460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.208493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.208643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.208675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.208873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.208914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.209085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.209119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.209292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.209325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 
00:54:10.294 [2024-12-09 05:49:04.209473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.209507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.209673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.209707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.209848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.209882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.210044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.210078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.210228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.210261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.210448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.210481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.210632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.210678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.210849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.210884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.211092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.211152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.211348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.211382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 
00:54:10.294 [2024-12-09 05:49:04.211497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.211528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.211658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.211691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.211863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.211934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.212125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.212158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.212306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.212341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.212453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.212484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.212631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.212664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.212798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.212831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.212940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.212980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.213226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.213287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 
00:54:10.294 [2024-12-09 05:49:04.213431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.213467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.213608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.213643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.213779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.213821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.214029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.214122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.214372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.214400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.214487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.214514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.214638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.294 [2024-12-09 05:49:04.214684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.294 qpair failed and we were unable to recover it. 00:54:10.294 [2024-12-09 05:49:04.214926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.295 [2024-12-09 05:49:04.215017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.295 qpair failed and we were unable to recover it. 00:54:10.295 [2024-12-09 05:49:04.215299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.295 [2024-12-09 05:49:04.215354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.295 qpair failed and we were unable to recover it. 00:54:10.295 [2024-12-09 05:49:04.215464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.295 [2024-12-09 05:49:04.215506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.295 qpair failed and we were unable to recover it. 
00:54:10.295 [2024-12-09 05:49:04.215630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.295 [2024-12-09 05:49:04.215656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.295 qpair failed and we were unable to recover it. 00:54:10.295 [2024-12-09 05:49:04.215789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.295 [2024-12-09 05:49:04.215815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.295 qpair failed and we were unable to recover it. 00:54:10.295 [2024-12-09 05:49:04.215938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.295 [2024-12-09 05:49:04.215988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.295 qpair failed and we were unable to recover it. 00:54:10.295 [2024-12-09 05:49:04.216106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.295 [2024-12-09 05:49:04.216134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.295 qpair failed and we were unable to recover it. 00:54:10.295 [2024-12-09 05:49:04.216233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.295 [2024-12-09 05:49:04.216261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.295 qpair failed and we were unable to recover it. 00:54:10.295 [2024-12-09 05:49:04.216377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.295 [2024-12-09 05:49:04.216404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.295 qpair failed and we were unable to recover it. 00:54:10.295 [2024-12-09 05:49:04.216533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.295 [2024-12-09 05:49:04.216566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.295 qpair failed and we were unable to recover it. 00:54:10.295 [2024-12-09 05:49:04.216727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.295 [2024-12-09 05:49:04.216792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.295 qpair failed and we were unable to recover it. 00:54:10.295 [2024-12-09 05:49:04.216984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.295 [2024-12-09 05:49:04.217079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.295 qpair failed and we were unable to recover it. 00:54:10.295 [2024-12-09 05:49:04.217287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.295 [2024-12-09 05:49:04.217324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.295 qpair failed and we were unable to recover it. 
00:54:10.295 [2024-12-09 05:49:04.217450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.295 [2024-12-09 05:49:04.217478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.295 qpair failed and we were unable to recover it. 00:54:10.295 [2024-12-09 05:49:04.217573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.295 [2024-12-09 05:49:04.217612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.295 qpair failed and we were unable to recover it. 00:54:10.295 [2024-12-09 05:49:04.217723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.295 [2024-12-09 05:49:04.217749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.295 qpair failed and we were unable to recover it. 00:54:10.295 [2024-12-09 05:49:04.217887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.295 [2024-12-09 05:49:04.217913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.295 qpair failed and we were unable to recover it. 00:54:10.295 [2024-12-09 05:49:04.218605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.295 [2024-12-09 05:49:04.218652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.295 qpair failed and we were unable to recover it. 00:54:10.295 [2024-12-09 05:49:04.218752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.295 [2024-12-09 05:49:04.218780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.295 qpair failed and we were unable to recover it. 00:54:10.295 [2024-12-09 05:49:04.218915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.295 [2024-12-09 05:49:04.218940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.295 qpair failed and we were unable to recover it. 00:54:10.295 [2024-12-09 05:49:04.219074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.295 [2024-12-09 05:49:04.219101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.295 qpair failed and we were unable to recover it. 00:54:10.295 [2024-12-09 05:49:04.219179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.295 [2024-12-09 05:49:04.219207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.295 qpair failed and we were unable to recover it. 00:54:10.295 [2024-12-09 05:49:04.219326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.295 [2024-12-09 05:49:04.219353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.295 qpair failed and we were unable to recover it. 
00:54:10.295 [2024-12-09 05:49:04.219437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:10.295 [2024-12-09 05:49:04.219463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420
00:54:10.295 qpair failed and we were unable to recover it.
00:54:10.295-00:54:10.301 [... the same three-line failure sequence repeats continuously from 05:49:04.219 through 05:49:04.254 for tqpair=0x7faa14000b90, 0x7faa0c000b90, and 0x7faa08000b90; every connect() attempt against addr=10.0.0.2, port=4420 fails with errno = 111 and ends with "qpair failed and we were unable to recover it." ...]
00:54:10.301 [2024-12-09 05:49:04.254129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.301 [2024-12-09 05:49:04.254155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.301 qpair failed and we were unable to recover it. 00:54:10.301 [2024-12-09 05:49:04.254245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.301 [2024-12-09 05:49:04.254278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.301 qpair failed and we were unable to recover it. 00:54:10.301 [2024-12-09 05:49:04.254406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.301 [2024-12-09 05:49:04.254432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.301 qpair failed and we were unable to recover it. 00:54:10.301 [2024-12-09 05:49:04.254510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.301 [2024-12-09 05:49:04.254546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.301 qpair failed and we were unable to recover it. 00:54:10.301 [2024-12-09 05:49:04.254627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.301 [2024-12-09 05:49:04.254652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.301 qpair failed and we were unable to recover it. 00:54:10.301 [2024-12-09 05:49:04.254731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.301 [2024-12-09 05:49:04.254757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.301 qpair failed and we were unable to recover it. 00:54:10.301 [2024-12-09 05:49:04.254850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.301 [2024-12-09 05:49:04.254877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.301 qpair failed and we were unable to recover it. 00:54:10.301 [2024-12-09 05:49:04.254966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.301 [2024-12-09 05:49:04.254993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.301 qpair failed and we were unable to recover it. 00:54:10.301 [2024-12-09 05:49:04.255135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.301 [2024-12-09 05:49:04.255162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.301 qpair failed and we were unable to recover it. 00:54:10.301 [2024-12-09 05:49:04.255247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.301 [2024-12-09 05:49:04.255283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.301 qpair failed and we were unable to recover it. 
00:54:10.301 [2024-12-09 05:49:04.255385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.301 [2024-12-09 05:49:04.255410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.301 qpair failed and we were unable to recover it. 00:54:10.301 [2024-12-09 05:49:04.255496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.301 [2024-12-09 05:49:04.255522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.301 qpair failed and we were unable to recover it. 00:54:10.301 [2024-12-09 05:49:04.255641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.255665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.255781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.255807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.255914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.255938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.256029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.256055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.256172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.256196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.256283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.256321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.256406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.256430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.256516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.256548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 
00:54:10.302 [2024-12-09 05:49:04.256655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.256679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.256791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.256816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.256901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.256933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.257042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.257077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.257226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.257252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.257346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.257373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.257463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.257489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.257592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.257617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.257733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.257759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.257881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.257906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 
00:54:10.302 [2024-12-09 05:49:04.257982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.258007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.258085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.258110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.258213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.258238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.258342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.258367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.258478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.258503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.259175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.259213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.259324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.259351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.259445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.259470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.259567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.259597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.259686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.259715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 
00:54:10.302 [2024-12-09 05:49:04.259854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.259880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.259964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.259990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.260081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.260106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.260227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.260253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.260356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.260381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.260461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.260486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.260610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.260636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.260756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.260781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.260875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.302 [2024-12-09 05:49:04.260900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.302 qpair failed and we were unable to recover it. 00:54:10.302 [2024-12-09 05:49:04.261036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.261060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 
00:54:10.303 [2024-12-09 05:49:04.261201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.261227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.261332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.261358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.261439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.261466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.261640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.261688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.261797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.261823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.261938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.261964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.262039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.262064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.262168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.262193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.262307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.262333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.262420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.262445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 
00:54:10.303 [2024-12-09 05:49:04.262533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.262559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.262634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.262659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.262739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.262764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.262841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.262865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.262944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.262969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.263064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.263088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.263208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.263233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.263347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.263372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.263483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.263507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.263631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.263659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 
00:54:10.303 [2024-12-09 05:49:04.263798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.263824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.263917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.263943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.264015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.264041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.264155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.264180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.264262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.264294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.264388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.264412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.264494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.264518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.264636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.264660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.264741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.264765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.264878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.264907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 
00:54:10.303 [2024-12-09 05:49:04.264998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.265022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.265163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.265188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.265268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.265307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.265386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.265411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.265494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.265521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.265662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.265688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.265805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.265832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.265941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.265967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.303 [2024-12-09 05:49:04.266080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.303 [2024-12-09 05:49:04.266105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.303 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.266219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.266245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 
00:54:10.304 [2024-12-09 05:49:04.266365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.266391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.266488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.266513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.266626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.266651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.266743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.266768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.266880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.266905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.267011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.267036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.267108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.267133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.267260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.267310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.267411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.267453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.267597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.267627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 
00:54:10.304 [2024-12-09 05:49:04.267734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.267762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.267848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.267875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.267984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.268011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.268128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.268156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.268280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.268307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.268393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.268418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.268511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.268536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.268627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.268692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.268805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.268830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.268967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.268993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 
00:54:10.304 [2024-12-09 05:49:04.269083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.269110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.269225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.269250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.269370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.269395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.269519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.269564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.269645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.269672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.269765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.269792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.269932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.269958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.270042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.270068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.270192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.270216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.270375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.270423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 
00:54:10.304 [2024-12-09 05:49:04.270557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.270602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.270755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.270810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.270919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.270943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.271045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.271084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.271182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.271209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.271366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.271398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.271570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.271660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.304 [2024-12-09 05:49:04.271861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.304 [2024-12-09 05:49:04.271895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.304 qpair failed and we were unable to recover it. 00:54:10.305 [2024-12-09 05:49:04.272032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.305 [2024-12-09 05:49:04.272067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.305 qpair failed and we were unable to recover it. 00:54:10.305 [2024-12-09 05:49:04.272211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.305 [2024-12-09 05:49:04.272243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.305 qpair failed and we were unable to recover it. 
00:54:10.305 [2024-12-09 05:49:04.272374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:10.305 [2024-12-09 05:49:04.272402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420
00:54:10.305 qpair failed and we were unable to recover it.
00:54:10.310 [the same three-line error repeats without interruption from 2024-12-09 05:49:04.272539 through 05:49:04.309521 (log time 00:54:10.305-00:54:10.310): every connect() to addr=10.0.0.2, port=4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x7faa08000b90, 0x7faa0c000b90, 0x7faa14000b90 and 0xe16fa0 in turn, and each time the qpair failed and we were unable to recover it]
00:54:10.310 [2024-12-09 05:49:04.309616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.310 [2024-12-09 05:49:04.309643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.310 qpair failed and we were unable to recover it. 00:54:10.310 [2024-12-09 05:49:04.309747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.310 [2024-12-09 05:49:04.309778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.310 qpair failed and we were unable to recover it. 00:54:10.310 [2024-12-09 05:49:04.309933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.310 [2024-12-09 05:49:04.309969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.310 qpair failed and we were unable to recover it. 00:54:10.310 [2024-12-09 05:49:04.310100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.310 [2024-12-09 05:49:04.310131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.310 qpair failed and we were unable to recover it. 00:54:10.310 [2024-12-09 05:49:04.310284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.310 [2024-12-09 05:49:04.310313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.310 qpair failed and we were unable to recover it. 00:54:10.310 [2024-12-09 05:49:04.310403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.310 [2024-12-09 05:49:04.310430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.310 qpair failed and we were unable to recover it. 00:54:10.310 [2024-12-09 05:49:04.310524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.310 [2024-12-09 05:49:04.310569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.310 qpair failed and we were unable to recover it. 00:54:10.310 [2024-12-09 05:49:04.310757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.310 [2024-12-09 05:49:04.310791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.310 qpair failed and we were unable to recover it. 00:54:10.310 [2024-12-09 05:49:04.310971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.310 [2024-12-09 05:49:04.311010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.310 qpair failed and we were unable to recover it. 00:54:10.310 [2024-12-09 05:49:04.311123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.310 [2024-12-09 05:49:04.311170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.310 qpair failed and we were unable to recover it. 
00:54:10.310 [2024-12-09 05:49:04.311255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.310 [2024-12-09 05:49:04.311292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.310 qpair failed and we were unable to recover it. 00:54:10.310 [2024-12-09 05:49:04.311411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.310 [2024-12-09 05:49:04.311444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.311543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.311572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.311747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.311781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.311926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.311960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.312128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.312163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.312289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.312319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.312417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.312444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.312535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.312578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.312787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.312833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 
00:54:10.311 [2024-12-09 05:49:04.312986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.313023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.313130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.313165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.313316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.313346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.313438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.313464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.313545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.313572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.313679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.313711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.313831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.313868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.314016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.314052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.314200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.314229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.314340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.314368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 
00:54:10.311 [2024-12-09 05:49:04.314452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.314479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.314633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.314669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.314805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.314840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.315012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.315046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.315184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.315228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.315336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.315393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.315510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.315544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.315689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.315723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.315854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.315888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.315994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.316028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 
00:54:10.311 [2024-12-09 05:49:04.316166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.316203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.316347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.316381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.316493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.316563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.316655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.316684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.316830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.316880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.317001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.317029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.317146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.317176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.317282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.311 [2024-12-09 05:49:04.317314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.311 qpair failed and we were unable to recover it. 00:54:10.311 [2024-12-09 05:49:04.317420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.317447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.317556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.317593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 
00:54:10.312 [2024-12-09 05:49:04.317715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.317744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.317893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.317922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.318031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.318070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.318211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.318264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.318397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.318432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.318548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.318582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.318729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.318764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.318908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.318942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.319055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.319092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.319186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.319215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 
00:54:10.312 [2024-12-09 05:49:04.319316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.319349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.319484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.319520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.319720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.319770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.319920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.319956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.320113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.320143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.320279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.320326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.320442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.320476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.320610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.320643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.320753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.320792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.320894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.320926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 
00:54:10.312 [2024-12-09 05:49:04.321038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.321072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.321221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.321249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.321380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.321429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.321529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.321563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.321698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.321731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.321865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.321899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.322026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.322054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.322150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.322177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.322296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.322324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.322444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.322478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 
00:54:10.312 [2024-12-09 05:49:04.322610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.322643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.322748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.322782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.322961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.323027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.323131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.323162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.323259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.323298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.323420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.323460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.323578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.323614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.323754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.323789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.312 [2024-12-09 05:49:04.323931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.312 [2024-12-09 05:49:04.323967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.312 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.324125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.324155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 
00:54:10.313 [2024-12-09 05:49:04.324256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.324291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.324409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.324457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.324563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.324592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.324738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.324767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.324886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.324915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.325021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.325050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.325175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.325204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.325311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.325346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.325438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.325468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.325567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.325594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 
00:54:10.313 [2024-12-09 05:49:04.325706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.325734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.325820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.325846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.325933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.325959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.326055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.326083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.326178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.326205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.326291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.326319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.326435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.326481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.326616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.326664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.326798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.326828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.326917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.326945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 
00:54:10.313 [2024-12-09 05:49:04.327053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.327082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.327184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.327211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.327323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.327357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.327466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.327499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.327638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.327674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.327901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.327937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.328084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.328135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.328258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.328304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.328427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.328471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.328616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.328649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 
00:54:10.313 [2024-12-09 05:49:04.328819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.328852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.329052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.329098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.329185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.329214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.329323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.329350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.329453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.329479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.329638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.329664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.329775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.313 [2024-12-09 05:49:04.329801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.313 qpair failed and we were unable to recover it. 00:54:10.313 [2024-12-09 05:49:04.329973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.330007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.330190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.330216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.330300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.330327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 
00:54:10.314 [2024-12-09 05:49:04.330421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.330448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.330630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.330664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.330767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.330812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.330918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.330953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.331052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.331086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.331258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.331308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.331424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.331455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.331630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.331682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.331852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.331898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.332012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.332042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 
00:54:10.314 [2024-12-09 05:49:04.332136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.332166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.332279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.332306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.332405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.332432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.332527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.332561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.332731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.332781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.332961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.332988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.333127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.333154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.333361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.333410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.333518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.333563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.333748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.333799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 
00:54:10.314 [2024-12-09 05:49:04.333942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.333972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.334113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.334151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.334283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.334350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.334484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.334529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.334696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.334733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.334862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.334897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.335012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.335058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.335223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.335250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.335353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.335382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.335495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.335548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 
00:54:10.314 [2024-12-09 05:49:04.335658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.335704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.335858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.335911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.336066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.336096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.336206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.336246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.336383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.336425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.336562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.336605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.336812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.336850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.336972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.337000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.314 [2024-12-09 05:49:04.337117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.314 [2024-12-09 05:49:04.337145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.314 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.337247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.337283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 
00:54:10.315 [2024-12-09 05:49:04.337386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.337413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.337528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.337556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.337672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.337704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.337848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.337875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.338014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.338042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.338191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.338218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.338342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.338373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.338454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.338482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.338610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.338637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.338752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.338784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 
00:54:10.315 [2024-12-09 05:49:04.338919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.338947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.339052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.339084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.339212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.339239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.339349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.339385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.339561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.339608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.339708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.339737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.339840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.339892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.340019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.340047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.340188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.340220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.340365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.340409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 
00:54:10.315 [2024-12-09 05:49:04.340568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.340643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.340891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.340958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.341075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.341102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.341213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.341240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.341358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.341385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.341493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.341523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.341606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.341662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.341821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.341870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.342045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.342083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.342199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.342227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 
00:54:10.315 [2024-12-09 05:49:04.342352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.342383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.315 qpair failed and we were unable to recover it. 00:54:10.315 [2024-12-09 05:49:04.342557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.315 [2024-12-09 05:49:04.342602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.342710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.342768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.342910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.342937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.343080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.343108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.343221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.343248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.343380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.343424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.343520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.343565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.343709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.343764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.343971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.344005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 
00:54:10.316 [2024-12-09 05:49:04.344115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.344145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.344227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.344253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.344400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.344435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.344568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.344601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.344787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.344840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.345074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.345119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.345234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.345260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.345382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.345427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.345536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.345578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.345735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.345790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 
00:54:10.316 [2024-12-09 05:49:04.346000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.346030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.346140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.346170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.346300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.346328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.346465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.346493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.346589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.346618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.346777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.346832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.346946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.346974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.347064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.347119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.347238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.347265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.347404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.347439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 
00:54:10.316 [2024-12-09 05:49:04.347577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.347612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.347760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.347801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.348028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.348063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.348223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.348267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.348410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.348440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.348672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.348701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.348854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.348924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.349111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.349142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.349288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.349314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 00:54:10.316 [2024-12-09 05:49:04.349393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.316 [2024-12-09 05:49:04.349420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.316 qpair failed and we were unable to recover it. 
00:54:10.316 [2024-12-09 05:49:04.349531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.349570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.349734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.349782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.349922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.349968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.350095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.350123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.350218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.350257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.350436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.350477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.350626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.350676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.350830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.350890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.351061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.351118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.351282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.351320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 
00:54:10.317 [2024-12-09 05:49:04.351461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.351513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.351693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.351741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.351853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.351905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.352027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.352064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.352185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.352224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.352377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.352417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.352585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.352634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.352746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.352795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.352881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.352907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.353050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.353099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 
00:54:10.317 [2024-12-09 05:49:04.353208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.353234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.353350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.353391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.353547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.353598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.353798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.353865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.353990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.354042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.354137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.354163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.354292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.354318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.354447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.354475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.354587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.354634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.354736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.354772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 
00:54:10.317 [2024-12-09 05:49:04.354919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.354954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.355094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.355128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.355233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.355259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.355376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.355409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.355513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.355542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.355661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.355706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.355851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.355912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.356054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.356085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.356180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.356217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.356360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.356387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 
00:54:10.317 [2024-12-09 05:49:04.356520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.356585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.317 [2024-12-09 05:49:04.356815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.317 [2024-12-09 05:49:04.356853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.317 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.356989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.357029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.357176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.357202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.357333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.357362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.357501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.357536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.357681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.357714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.357925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.357959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.358156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.358183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.358302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.358331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 
00:54:10.318 [2024-12-09 05:49:04.358438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.358468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.358607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.358659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.358807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.358853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.358999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.359036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.359196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.359225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.359340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.359368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.359478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.359535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.359654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.359689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.359824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.359868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.360007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.360041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 
00:54:10.318 [2024-12-09 05:49:04.360230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.360291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.360429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.360477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.360588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.360615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.360736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.360764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.360881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.360919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.361079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.361120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.361254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.361294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.361419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.361463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.361568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.361607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.361795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.361844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 
00:54:10.318 [2024-12-09 05:49:04.361963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.361992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.362151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.362179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.362280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.362347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.362486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.362514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.362617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.362646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.362757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.362783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.362947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.362981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.363144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.363170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.363263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.363306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 00:54:10.318 [2024-12-09 05:49:04.363437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.318 [2024-12-09 05:49:04.363481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.318 qpair failed and we were unable to recover it. 
00:54:10.318 [2024-12-09 05:49:04.363614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.363661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.363776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.363819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.363940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.363966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.364110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.364137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.364292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.364320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.364448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.364491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.364663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.364718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.364836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.364864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.364998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.365039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.365162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.365191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 
00:54:10.319 [2024-12-09 05:49:04.365317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.365363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.365447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.365495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.365667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.365701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.365804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.365839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.365983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.366011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.366132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.366160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.366318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.366346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.366521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.366577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.366740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.366770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.366872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.366920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 
00:54:10.319 [2024-12-09 05:49:04.367063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.367091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.367202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.367229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.367345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.367374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.367513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.367548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.367660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.367697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.367886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.367920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.368058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.368087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.368224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.368252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.368365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.368412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.368553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.368598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 
00:54:10.319 [2024-12-09 05:49:04.368752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.368779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.368902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.368928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.369008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.319 [2024-12-09 05:49:04.369035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.319 qpair failed and we were unable to recover it. 00:54:10.319 [2024-12-09 05:49:04.369143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.369169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.369259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.369291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.369391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.369419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.369528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.369558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.369637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.369663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.369796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.369842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.369961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.369994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 
00:54:10.320 [2024-12-09 05:49:04.370111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.370141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.370268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.370311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.370429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.370455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.370571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.370597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.370735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.370769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.370911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.370945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.371117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.371145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.371335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.371412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.371567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.371619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.371771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.371819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 
00:54:10.320 [2024-12-09 05:49:04.371946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.371975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.372112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.372170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.372268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.372306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.372450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.372495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.372724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.372790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.373013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.373082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.373217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.373252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.373401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.373427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.373562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.373601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.373748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.373783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 
00:54:10.320 [2024-12-09 05:49:04.374102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.374131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.374231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.374259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.374399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.374442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.374601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.374629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.374850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.374933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.375140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.375207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.375312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.375340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.375480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.375507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.375730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.375793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.376115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.376141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 
00:54:10.320 [2024-12-09 05:49:04.376256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.376288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.376392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.376419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.376534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.376609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.320 [2024-12-09 05:49:04.376869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.320 [2024-12-09 05:49:04.376934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.320 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.377112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.377138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.377281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.377308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.377385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.377428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.377548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.377612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.377807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.377857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.378007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.378040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 
00:54:10.321 [2024-12-09 05:49:04.378230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.378256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.378351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.378377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.378529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.378555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.378757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.378786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.379064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.379110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.379219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.379245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.379346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.379372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.379457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.379483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.379625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.379658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.379892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.379960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 
00:54:10.321 [2024-12-09 05:49:04.380093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.380133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.380283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.380313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.380403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.380436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.380540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.380609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.380810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.380878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.381004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.381048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.381161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.381188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.381264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.381307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.381426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.381482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.381765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.381839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 
00:54:10.321 [2024-12-09 05:49:04.382030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.382086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.382192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.382236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.382387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.382435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.382546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.382618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.382757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.382804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.382924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.382951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.383060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.383100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.383256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.383291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.383428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.383476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.383638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.383689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 
00:54:10.321 [2024-12-09 05:49:04.383820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.383855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.384001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.384028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.384111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.384138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.384258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.384296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.384498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.321 [2024-12-09 05:49:04.384539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.321 qpair failed and we were unable to recover it. 00:54:10.321 [2024-12-09 05:49:04.384789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.384817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.384935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.384962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.385051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.385092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.385215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.385242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.385399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.385444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 
00:54:10.322 [2024-12-09 05:49:04.385581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.385626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.385802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.385830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.385923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.385962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.386079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.386106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.386197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.386224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.386358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.386403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.386635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.386708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.386915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.386943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.387055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.387082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.387165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.387208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 
00:54:10.322 [2024-12-09 05:49:04.387354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.387398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.387551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.387589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.387726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.387770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.387974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.388002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.388138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.388168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.388288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.388317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.388458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.388508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.388650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.388678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.388817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.388845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.388934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.388961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 
00:54:10.322 [2024-12-09 05:49:04.389071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.389098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.389241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.389268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.389363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.389390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.389480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.389507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.389634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.389664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.389743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.389770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.389922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.389950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.390034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.390063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.390185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.390215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.390336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.390366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 
00:54:10.322 [2024-12-09 05:49:04.390477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.390503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.390586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.390612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.390811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.322 [2024-12-09 05:49:04.390838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.322 qpair failed and we were unable to recover it. 00:54:10.322 [2024-12-09 05:49:04.390980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.391006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.391164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.391193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.391288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.391318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.391457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.391511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.391698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.391750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.391839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.391867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.391985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.392023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 
00:54:10.323 [2024-12-09 05:49:04.392113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.392141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.392258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.392295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.392416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.392443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.392548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.392586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.392700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.392730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.392803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.392830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.392943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.392969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.393092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.393121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.393225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.393284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.393366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.393396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 
00:54:10.323 [2024-12-09 05:49:04.393489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.393516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.393646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.393697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.393874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.393903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.394039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.394066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.394214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.394242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.394345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.394383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.394507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.394534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.394611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.394636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.394750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.394778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.394878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.394906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 
00:54:10.323 [2024-12-09 05:49:04.394988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.395014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.395110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.395139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.395249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.395284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.395405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.395431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.395512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.395559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.395689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.395732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.395890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.395936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.396053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.396082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.396176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.396203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.396292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.396322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 
00:54:10.323 [2024-12-09 05:49:04.396406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.396433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.396537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.396565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.396661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.323 [2024-12-09 05:49:04.396689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.323 qpair failed and we were unable to recover it. 00:54:10.323 [2024-12-09 05:49:04.396807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.396836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.396951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.396992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.397120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.397149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.397278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.397307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.397394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.397437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.397533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.397562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.397650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.397684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 
00:54:10.324 [2024-12-09 05:49:04.397801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.397829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.397950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.397977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.398077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.398107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.398190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.398219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.398349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.398376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.398516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.398560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.398747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.398802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.398940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.398992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.399122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.399152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.399286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.399322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 
00:54:10.324 [2024-12-09 05:49:04.399462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.399489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.399648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.399677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.399825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.399889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.400144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.400212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.400427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.400454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.400596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.400622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.400738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.400764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.400879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.400905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.401030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.401093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.401333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.401360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 
00:54:10.324 [2024-12-09 05:49:04.401445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.401472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.401610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.401637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.401712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.401738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.401873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.401940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.402107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.402150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.324 [2024-12-09 05:49:04.402231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.324 [2024-12-09 05:49:04.402259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.324 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.402370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.402397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.402545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.402571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.402689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.402769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.403057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.403128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 
00:54:10.325 [2024-12-09 05:49:04.403329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.403356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.403448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.403474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.403654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.403680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.403845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.403912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.404158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.404188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.404309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.404353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.404444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.404470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.404587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.404614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.404725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.404770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.404911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.404963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 
00:54:10.325 [2024-12-09 05:49:04.405200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.405265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.405404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.405430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.405544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.405573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.405652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.405678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.405830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.405878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.406138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.406164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.406285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.406320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.406430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.406457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.406570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.406600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.406707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.406734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 
00:54:10.325 [2024-12-09 05:49:04.406880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.406913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.407149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.407214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.407397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.407423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.407543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.407569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.407683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.407709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.407795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.407821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.407908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.407963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.408157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.408221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.408452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.408479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.408607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.408634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 
00:54:10.325 [2024-12-09 05:49:04.408785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.408813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.408987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.409015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.409207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.409236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.409384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.325 [2024-12-09 05:49:04.409411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.325 qpair failed and we were unable to recover it. 00:54:10.325 [2024-12-09 05:49:04.409499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.409524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.409617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.409645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.409772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.409801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.409994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.410059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.410250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.410292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.410403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.410428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 
00:54:10.326 [2024-12-09 05:49:04.410518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.410554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.410664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.410714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.410989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.411054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.411337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.411364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.411522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.411577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.411698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.411742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.411822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.411852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.411985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.412036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.412193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.412222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.412364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.412398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 
00:54:10.326 [2024-12-09 05:49:04.412518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.412546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.412659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.412687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.412802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.412829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.412960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.412989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.413089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.413133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.413236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.413287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.413432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.413460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.413550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.413576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.413712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.413739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.413847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.413875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 
00:54:10.326 [2024-12-09 05:49:04.413977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.414003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.414108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.414150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.414266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.414303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.414451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.414478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.414568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.414612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.414821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.414864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.415073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.415138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.415340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.415378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.415500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.415537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.415757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.415822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 
00:54:10.326 [2024-12-09 05:49:04.416061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.416124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.416385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.416413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.416541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.416567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.416660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.326 [2024-12-09 05:49:04.416688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.326 qpair failed and we were unable to recover it. 00:54:10.326 [2024-12-09 05:49:04.416969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.417031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.417221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.417245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.417372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.417412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.417539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.417584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.417748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.417775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.417887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.417914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 
00:54:10.327 [2024-12-09 05:49:04.418047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.418085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.418201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.418257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.418431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.418458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.418601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.418665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.418917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.418982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.419249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.419285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.419406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.419436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.419552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.419580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.419839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.419906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.420198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.420266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 
00:54:10.327 [2024-12-09 05:49:04.420436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.420463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.420645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.420709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.420983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.421047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.421322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.421351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.421474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.421503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.421617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.421645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.421743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.421771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.422040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.422103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.422278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.422308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.422431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.422459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 
00:54:10.327 [2024-12-09 05:49:04.422681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.422745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.422971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.423035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.423249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.423285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.423400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.423433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.423578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.423606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.423714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.423742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.423947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.424011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.424206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.424234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.424352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.424381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.424481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.424510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 
00:54:10.327 [2024-12-09 05:49:04.424631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.424659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.424779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.424824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.425034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.327 [2024-12-09 05:49:04.425098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.327 qpair failed and we were unable to recover it. 00:54:10.327 [2024-12-09 05:49:04.425296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.425325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.425427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.425455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.425575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.425603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.425774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.425837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.426047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.426111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.426337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.426366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.426476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.426504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 
00:54:10.328 [2024-12-09 05:49:04.426592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.426620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.426706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.426734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.426891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.426955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.427164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.427227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.427384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.427412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.427574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.427617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.427843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.427897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.427989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.428020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.428147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.428177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.428328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.428358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 
00:54:10.328 [2024-12-09 05:49:04.428459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.428494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.428585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.428615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.428729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.428757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.428849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.428878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.429030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.429060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.429154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.429184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.429287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.429316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.429403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.429432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.429580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.429609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.429732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.429761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 
00:54:10.328 [2024-12-09 05:49:04.429858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.429888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.429996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.430025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.430142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.430172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.430267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.430303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.430398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.430428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.430548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.430577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.430689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.430717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.430837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.430866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.430990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.328 [2024-12-09 05:49:04.431018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.328 qpair failed and we were unable to recover it. 00:54:10.328 [2024-12-09 05:49:04.431099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.431126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 
00:54:10.329 [2024-12-09 05:49:04.431241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.431270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.431403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.431432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.431525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.431555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.431672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.431701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.431815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.431844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.431938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.431967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.432084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.432113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.432243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.432306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.432465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.432495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.432586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.432615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 
00:54:10.329 [2024-12-09 05:49:04.432763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.432792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.432966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.433030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.433265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.433345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.433552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.433617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.433907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.433979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.434264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.434339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.434486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.434514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.434626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.434682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.434909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.434972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.435185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.435213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 
00:54:10.329 [2024-12-09 05:49:04.435306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.435335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.435431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.435460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.435661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.435725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.435982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.436047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.436336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.436365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.436511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.436540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.436726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.436789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.437105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.437169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.437353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.437381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.437526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.437555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 
00:54:10.329 [2024-12-09 05:49:04.437740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.437803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.438099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.438172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.438376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.438410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.438532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.438562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.438713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.438788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.439057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.439121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.439349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.439378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.439495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.439524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.439648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.439712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 00:54:10.329 [2024-12-09 05:49:04.439995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.329 [2024-12-09 05:49:04.440059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.329 qpair failed and we were unable to recover it. 
00:54:10.329 [2024-12-09 05:49:04.440238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.440266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.440392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.440420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.440532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.440560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.440795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.440859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.441147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.441213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.441419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.441448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.441663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.441726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.442013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.442079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.442398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.442465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.442720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.442785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 
00:54:10.330 [2024-12-09 05:49:04.443074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.443138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.443385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.443451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.443627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.443691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.443973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.444038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.444329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.444394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.444683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.444748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.445056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.445119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.445365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.445431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.445680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.445747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.446011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.446076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 
00:54:10.330 [2024-12-09 05:49:04.446320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.446384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.446675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.446739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.447056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.447125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.447380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.447459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.447752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.447817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.448112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.448177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.448458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.448523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.448739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.448804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.449026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.449091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.449357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.449423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 
00:54:10.330 [2024-12-09 05:49:04.449681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.449745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.449999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.450064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.450348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.450413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.450653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.450718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.450961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.451025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.451316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.451400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.451651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.330 [2024-12-09 05:49:04.451719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.330 qpair failed and we were unable to recover it. 00:54:10.330 [2024-12-09 05:49:04.452006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.452071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.452368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.452434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.452738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.452803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 
00:54:10.331 [2024-12-09 05:49:04.453098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.453163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.453432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.453498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.453715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.453779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.454075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.454140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.454345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.454408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.454699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.454763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.455021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.455086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.455390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.455464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.455704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.455771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.456074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.456139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 
00:54:10.331 [2024-12-09 05:49:04.456436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.456502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.456796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.456861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.457165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.457229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.457448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.457514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.457703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.457767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.458050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.458115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.458403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.458470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.458757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.458821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.459074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.459138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.459387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.459455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 
00:54:10.331 [2024-12-09 05:49:04.459647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.459714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.459898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.459965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.460244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.460360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.460617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.460685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.460970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.461034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.461219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.461300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.461590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.461656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.461967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.462031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.462300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.462366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.462666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.462731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 
00:54:10.331 [2024-12-09 05:49:04.462982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.463045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.463339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.463405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.463651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.331 [2024-12-09 05:49:04.463714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.331 qpair failed and we were unable to recover it. 00:54:10.331 [2024-12-09 05:49:04.464009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.464073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.464366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.464432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.464687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.464751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.465020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.465084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.465383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.465449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.465703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.465771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.466027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.466091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 
00:54:10.332 [2024-12-09 05:49:04.466374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.466442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.466737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.466803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.467086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.467150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.467453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.467520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.467809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.467874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.468116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.468183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.468489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.468556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.468809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.468873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.469125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.469189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.469463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.469530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 
00:54:10.332 [2024-12-09 05:49:04.469803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.469867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.470169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.470234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.470497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.470563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.470808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.470874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.471144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.471210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.471480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.471546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.471832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.471895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.472141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.472205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.472529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.472596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.472894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.472958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 
00:54:10.332 [2024-12-09 05:49:04.473172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.473236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.473510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.473575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.473868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.473932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.474226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.474330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.474544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.474608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.474829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.474893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.475156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.475219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.475522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.475588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.475885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.475950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.476192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.476255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 
00:54:10.332 [2024-12-09 05:49:04.476498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.476563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.476788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.476851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.477151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.477216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.477559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.477625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.332 qpair failed and we were unable to recover it. 00:54:10.332 [2024-12-09 05:49:04.477919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.332 [2024-12-09 05:49:04.477984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 00:54:10.333 [2024-12-09 05:49:04.478268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.478353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 00:54:10.333 [2024-12-09 05:49:04.478599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.478665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 00:54:10.333 [2024-12-09 05:49:04.478978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.479042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 00:54:10.333 [2024-12-09 05:49:04.479329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.479396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 00:54:10.333 [2024-12-09 05:49:04.479649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.479715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 
00:54:10.333 [2024-12-09 05:49:04.480005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.480069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 00:54:10.333 [2024-12-09 05:49:04.480374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.480441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 00:54:10.333 [2024-12-09 05:49:04.480661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.480726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 00:54:10.333 [2024-12-09 05:49:04.480971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.481037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 00:54:10.333 [2024-12-09 05:49:04.481327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.481394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 00:54:10.333 [2024-12-09 05:49:04.481646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.481711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 00:54:10.333 [2024-12-09 05:49:04.481917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.481980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 00:54:10.333 [2024-12-09 05:49:04.482264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.482344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 00:54:10.333 [2024-12-09 05:49:04.482629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.482695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 00:54:10.333 [2024-12-09 05:49:04.482902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.482967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 
00:54:10.333 [2024-12-09 05:49:04.483254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.483349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 00:54:10.333 [2024-12-09 05:49:04.483641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.483707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 00:54:10.333 [2024-12-09 05:49:04.483921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.483987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 00:54:10.333 [2024-12-09 05:49:04.484173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.484239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 00:54:10.333 [2024-12-09 05:49:04.484550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.484615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 00:54:10.333 [2024-12-09 05:49:04.484822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.484886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 00:54:10.333 [2024-12-09 05:49:04.485168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.485233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 00:54:10.333 [2024-12-09 05:49:04.485463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.485529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 00:54:10.333 [2024-12-09 05:49:04.485817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.485881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 00:54:10.333 [2024-12-09 05:49:04.486183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.486247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 
00:54:10.333 [2024-12-09 05:49:04.486478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.486543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 00:54:10.333 [2024-12-09 05:49:04.486730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.486797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.333 qpair failed and we were unable to recover it. 00:54:10.333 [2024-12-09 05:49:04.487065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.333 [2024-12-09 05:49:04.487130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.487372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.487439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.487657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.487723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.488013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.488077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.488321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.488387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.488622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.488687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.488969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.489032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.489291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.489357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 
00:54:10.609 [2024-12-09 05:49:04.489645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.489711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.489930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.489994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.490235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.490315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.490600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.490668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.490861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.490925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.491154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.491219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.491490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.491557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.491745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.491812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.492052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.492118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.492343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.492410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 
00:54:10.609 [2024-12-09 05:49:04.492591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.492655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.492939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.493004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.493172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.493241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.493478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.493544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.493732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.493799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.493993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.494064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.494359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.494425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.494712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.494779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.495044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.495109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.495360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.495427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 
00:54:10.609 [2024-12-09 05:49:04.495633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.495698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.495916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.495990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.496301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.496369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.496612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.496678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.496938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.497002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.497308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.497390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.497625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.497690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.497955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.498019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.609 [2024-12-09 05:49:04.498311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.609 [2024-12-09 05:49:04.498377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.609 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.498671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.498736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 
00:54:10.610 [2024-12-09 05:49:04.498996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.499062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.499316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.499383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.499633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.499698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.499961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.500026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.500258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.500345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.500618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.500687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.500983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.501048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.501307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.501380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.501674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.501738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.501995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.502059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 
00:54:10.610 [2024-12-09 05:49:04.502360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.502426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.502712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.502775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.503025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.503090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.503353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.503419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.503656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.503720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.504017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.504081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.504348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.504414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.504663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.504726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.504979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.505053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.505346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.505412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 
00:54:10.610 [2024-12-09 05:49:04.505676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.505739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.506003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.506068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.506322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.506390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.506588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.506656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.506943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.507007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.507216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.507297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.507568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.507631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.507925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.507989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.508241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.508337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.508559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.508623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 
00:54:10.610 [2024-12-09 05:49:04.508912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.508976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.509287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.509352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.509637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.509702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.509987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.510054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.510249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.510349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.510611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.510676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.510960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.610 [2024-12-09 05:49:04.511024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.610 qpair failed and we were unable to recover it. 00:54:10.610 [2024-12-09 05:49:04.511306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.511375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.511634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.511699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.511982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.512045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 
00:54:10.611 [2024-12-09 05:49:04.512306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.512372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.512633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.512698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.512938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.513001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.513249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.513342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.513636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.513701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.513981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.514045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.514327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.514393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.514591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.514655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.514946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.515011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.515305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.515371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 
00:54:10.611 [2024-12-09 05:49:04.515581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.515645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.515934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.516001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.516302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.516367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.516664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.516729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.517023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.517088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.517346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.517412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.517700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.517763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.518051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.518115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.518403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.518469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.518769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.518844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 
00:54:10.611 [2024-12-09 05:49:04.519152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.519216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.519449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.519515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.519762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.519827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.520116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.520180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.520452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.520517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.520769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.520834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.521136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.521199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.521484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.521551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.521759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.521823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 00:54:10.611 [2024-12-09 05:49:04.522127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.611 [2024-12-09 05:49:04.522191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.611 qpair failed and we were unable to recover it. 
00:54:10.616 [2024-12-09 05:49:04.581290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.616 [2024-12-09 05:49:04.581322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.616 qpair failed and we were unable to recover it. 00:54:10.616 [2024-12-09 05:49:04.581547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.616 [2024-12-09 05:49:04.581605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.616 qpair failed and we were unable to recover it. 00:54:10.616 [2024-12-09 05:49:04.581748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.616 [2024-12-09 05:49:04.581802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.616 qpair failed and we were unable to recover it. 00:54:10.616 [2024-12-09 05:49:04.581930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.581971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.582098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.582127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.582257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.582293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.582429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.582485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.582661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.582729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.582854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.582882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.582960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.582988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 
00:54:10.617 [2024-12-09 05:49:04.583099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.583127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.583218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.583248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.583409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.583454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.583609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.583640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.583755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.583784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.583900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.583933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.584090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.584119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.584251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.584291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.584641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.584707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.584954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.585021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 
00:54:10.617 [2024-12-09 05:49:04.585335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.585378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.585612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.585668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.585829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.585893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.586133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.586161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.586286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.586327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.586503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.586565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.586786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.586831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.587007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.587065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.587177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.587206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.587296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.587333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 
00:54:10.617 [2024-12-09 05:49:04.587507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.587568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.587656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.587684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.587850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.587900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.588014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.588043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.588162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.588190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.588282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.588320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.588416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.588445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.588569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.588597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.588698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.588727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.588851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.588879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 
00:54:10.617 [2024-12-09 05:49:04.589001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.589029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.617 qpair failed and we were unable to recover it. 00:54:10.617 [2024-12-09 05:49:04.589170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.617 [2024-12-09 05:49:04.589214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.589368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.589411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.589510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.589540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.589656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.589686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.589833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.589862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.589986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.590014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.590091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.590119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.590244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.590290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.590549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.590625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 
00:54:10.618 [2024-12-09 05:49:04.590823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.590891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.591188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.591256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.591474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.591503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.591727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.591793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.592071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.592149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.592353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.592383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.592507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.592547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.592845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.592926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.593166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.593238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.593411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.593440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 
00:54:10.618 [2024-12-09 05:49:04.593538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.593567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.593685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.593714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.593966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.594033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.594222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.594250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.594418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.594447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.594646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.594712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.595015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.595081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.595335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.595365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.595509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.595538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.595747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.595775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 
00:54:10.618 [2024-12-09 05:49:04.595867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.595897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.596167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.596234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.596423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.596454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.596589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.596641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.596918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.596985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.618 qpair failed and we were unable to recover it. 00:54:10.618 [2024-12-09 05:49:04.597158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.618 [2024-12-09 05:49:04.597187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.597338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.597372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.597499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.597529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.597634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.597663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.597782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.597812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 
00:54:10.619 [2024-12-09 05:49:04.597975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.598075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.598297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.598329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.598455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.598483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.598571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.598598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.598711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.598754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.598855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.598885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.599038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.599091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.599205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.599235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.599341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.599371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.599551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.599607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 
00:54:10.619 [2024-12-09 05:49:04.599794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.599856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.600074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.600127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.600242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.600277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.600414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.600465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.600588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.600616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.600731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.600759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.600861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.600889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.601003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.601032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.601156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.601184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.601304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.601334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 
00:54:10.619 [2024-12-09 05:49:04.601421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.601449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.601565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.601594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.601686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.601714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.601832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.601861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.601977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.602006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.602142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.602185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.602286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.602324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.602485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.602514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.602598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.602626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.602752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.602780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 
00:54:10.619 [2024-12-09 05:49:04.602903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.602931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.603115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.603193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.603440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.603468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.619 [2024-12-09 05:49:04.603612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.619 [2024-12-09 05:49:04.603677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.619 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.603942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.604006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.604308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.604336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.604452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.604480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.604599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.604627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.604773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.604801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.604950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.605034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 
00:54:10.620 [2024-12-09 05:49:04.605219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.605249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.605387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.605417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.605513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.605542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.605790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.605863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.606108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.606175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.606424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.606455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.606552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.606582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.606703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.606733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.606916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.606983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.607234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.607328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 
00:54:10.620 [2024-12-09 05:49:04.607479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.607508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.607691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.607761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.608007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.608070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.608333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.608362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.608487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.608515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.608636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.608664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.608752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.608781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.609002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.609072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.609322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.609351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.609498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.609530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 
00:54:10.620 [2024-12-09 05:49:04.609816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.609884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.610158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.610222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.610553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.610617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.610869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.610934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.611178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.611243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.611513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.611578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.611868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.611933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.612214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.612293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.612602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.620 [2024-12-09 05:49:04.612668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.620 qpair failed and we were unable to recover it. 00:54:10.620 [2024-12-09 05:49:04.612969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.613034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 
00:54:10.621 [2024-12-09 05:49:04.613240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.613330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.613616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.613696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.613905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.613971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.614243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.614324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.614574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.614641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.614886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.614951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.615246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.615327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.615585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.615649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.615856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.615920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.616118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.616185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 
00:54:10.621 [2024-12-09 05:49:04.616456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.616521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.616808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.616873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.617057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.617121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.617371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.617436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.617734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.617799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.618093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.618159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.618431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.618497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.618719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.618784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.619065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.619130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.619431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.619496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 
00:54:10.621 [2024-12-09 05:49:04.619788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.619852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.620103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.620168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.620412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.620480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.620781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.620846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.621098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.621163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.621474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.621541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.621734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.621799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.622093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.622158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.622425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.622491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.622777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.622842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 
00:54:10.621 [2024-12-09 05:49:04.623140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.623204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.623482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.623549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.623851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.623916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.624164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.624231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.624512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.624577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.624803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.624867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.625109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.621 [2024-12-09 05:49:04.625174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.621 qpair failed and we were unable to recover it. 00:54:10.621 [2024-12-09 05:49:04.625476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.625552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.625849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.625914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.626216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.626293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 
00:54:10.622 [2024-12-09 05:49:04.626589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.626652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.626909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.626984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.627333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.627399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.627702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.627766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.628057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.628122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.628407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.628472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.628756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.628820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.629076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.629140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.629442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.629506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.629756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.629819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 
00:54:10.622 [2024-12-09 05:49:04.630066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.630134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.630384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.630449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.630729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.630793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.631055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.631119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.631309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.631376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.631660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.631725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.632013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.632078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.632366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.632430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.632692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.632759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.633047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.633112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 
00:54:10.622 [2024-12-09 05:49:04.633407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.633473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.633668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.633734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.633986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.634050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.634368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.634434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.634734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.634799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.635059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.635123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.635369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.635435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.635698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.635762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.636063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.636135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.622 [2024-12-09 05:49:04.636439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.636504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 
00:54:10.622 [2024-12-09 05:49:04.636766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.622 [2024-12-09 05:49:04.636833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.622 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.637131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.637196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.637469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.637535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.637827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.637892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.638126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.638191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.638425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.638490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.638659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.638724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.639021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.639086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.639368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.639433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.639672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.639739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 
00:54:10.623 [2024-12-09 05:49:04.640029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.640094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.640390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.640466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.640724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.640789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.641089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.641153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.641341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.641406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.641658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.641722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.641978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.642043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.642300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.642371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.642620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.642688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.642971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.643036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 
00:54:10.623 [2024-12-09 05:49:04.643337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.643403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.643692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.643757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.644062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.644126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.644416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.644482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.644749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.644814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.645055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.645123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.645371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.645439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.645687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.645753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.646009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.646074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.646360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.646435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 
00:54:10.623 [2024-12-09 05:49:04.646629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.646692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.646931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.646997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.647289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.647366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.647584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.647649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.647912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.647976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.648263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.648344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.623 [2024-12-09 05:49:04.648642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.623 [2024-12-09 05:49:04.648706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.623 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.648955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.649019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.649328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.649393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.649677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.649742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 
00:54:10.624 [2024-12-09 05:49:04.649993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.650057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.650267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.650350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.650602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.650666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.650933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.650997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.651334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.651408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.651700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.651766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.652062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.652126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.652418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.652484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.652778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.652841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.653134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.653203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 
00:54:10.624 [2024-12-09 05:49:04.653522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.653588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.653877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.653958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.654246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.654328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.654622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.654688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.654983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.655047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.655300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.655366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.655659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.655723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.656020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.656084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.656338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.656407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.656689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.656754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 
00:54:10.624 [2024-12-09 05:49:04.657049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.657112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.657400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.657465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.657715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.657780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.658025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.658088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.658301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.658375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.658632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.658698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.658940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.659007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.659330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.659396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.659685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.659750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.660036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.660101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 
00:54:10.624 [2024-12-09 05:49:04.660402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.660468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.660766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.660831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.661128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.661193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.661461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.661526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.624 [2024-12-09 05:49:04.661772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.624 [2024-12-09 05:49:04.661837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.624 qpair failed and we were unable to recover it. 00:54:10.625 [2024-12-09 05:49:04.662010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.625 [2024-12-09 05:49:04.662075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.625 qpair failed and we were unable to recover it. 00:54:10.625 [2024-12-09 05:49:04.662370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.625 [2024-12-09 05:49:04.662435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.625 qpair failed and we were unable to recover it. 00:54:10.625 [2024-12-09 05:49:04.662737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.625 [2024-12-09 05:49:04.662802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.625 qpair failed and we were unable to recover it. 00:54:10.625 [2024-12-09 05:49:04.663098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.625 [2024-12-09 05:49:04.663163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.625 qpair failed and we were unable to recover it. 00:54:10.625 [2024-12-09 05:49:04.663433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.625 [2024-12-09 05:49:04.663499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.625 qpair failed and we were unable to recover it. 
00:54:10.625 [2024-12-09 05:49:04.663755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.625 [2024-12-09 05:49:04.663822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.625 qpair failed and we were unable to recover it. 00:54:10.625 [2024-12-09 05:49:04.664034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.625 [2024-12-09 05:49:04.664099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.625 qpair failed and we were unable to recover it. 00:54:10.625 [2024-12-09 05:49:04.664400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.625 [2024-12-09 05:49:04.664466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.625 qpair failed and we were unable to recover it. 00:54:10.625 [2024-12-09 05:49:04.664754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.625 [2024-12-09 05:49:04.664821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.625 qpair failed and we were unable to recover it. 00:54:10.625 [2024-12-09 05:49:04.665064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.625 [2024-12-09 05:49:04.665129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.625 qpair failed and we were unable to recover it. 00:54:10.625 [2024-12-09 05:49:04.665414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.625 [2024-12-09 05:49:04.665479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.625 qpair failed and we were unable to recover it. 00:54:10.625 [2024-12-09 05:49:04.665779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.625 [2024-12-09 05:49:04.665844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.625 qpair failed and we were unable to recover it. 00:54:10.625 [2024-12-09 05:49:04.666095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.625 [2024-12-09 05:49:04.666158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.625 qpair failed and we were unable to recover it. 00:54:10.625 [2024-12-09 05:49:04.666409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.625 [2024-12-09 05:49:04.666475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.625 qpair failed and we were unable to recover it. 00:54:10.625 [2024-12-09 05:49:04.666784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.625 [2024-12-09 05:49:04.666849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.625 qpair failed and we were unable to recover it. 
00:54:10.625 [2024-12-09 05:49:04.667133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.625 [2024-12-09 05:49:04.667197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.625 qpair failed and we were unable to recover it. 00:54:10.625 [2024-12-09 05:49:04.667515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.625 [2024-12-09 05:49:04.667604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.625 qpair failed and we were unable to recover it. 00:54:10.625 [2024-12-09 05:49:04.667909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.625 [2024-12-09 05:49:04.667975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.625 qpair failed and we were unable to recover it. 00:54:10.625 [2024-12-09 05:49:04.668262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.625 [2024-12-09 05:49:04.668352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.625 qpair failed and we were unable to recover it. 00:54:10.625 [2024-12-09 05:49:04.668605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.625 [2024-12-09 05:49:04.668670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.625 qpair failed and we were unable to recover it. 00:54:10.625 [2024-12-09 05:49:04.668937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.625 [2024-12-09 05:49:04.669001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.625 qpair failed and we were unable to recover it. 00:54:10.625 [2024-12-09 05:49:04.669246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.625 [2024-12-09 05:49:04.669331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.625 qpair failed and we were unable to recover it. 00:54:10.625 [2024-12-09 05:49:04.669638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.625 [2024-12-09 05:49:04.669703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.625 qpair failed and we were unable to recover it. 00:54:10.625 [2024-12-09 05:49:04.669954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.625 [2024-12-09 05:49:04.670020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.625 qpair failed and we were unable to recover it. 00:54:10.625 [2024-12-09 05:49:04.670314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.625 [2024-12-09 05:49:04.670380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.625 qpair failed and we were unable to recover it. 
[2024-12-09 05:49:04.670645 - 2024-12-09 05:49:04.734366] the same three-line failure repeats for every further connection attempt in this interval, differing only in its timestamps: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:54:10.631 [2024-12-09 05:49:04.734626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.734691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 00:54:10.631 [2024-12-09 05:49:04.734943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.735017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 00:54:10.631 [2024-12-09 05:49:04.735324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.735391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 00:54:10.631 [2024-12-09 05:49:04.735634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.735698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 00:54:10.631 [2024-12-09 05:49:04.735962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.736023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 00:54:10.631 [2024-12-09 05:49:04.736325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.736388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 00:54:10.631 [2024-12-09 05:49:04.736685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.736747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 00:54:10.631 [2024-12-09 05:49:04.737002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.737062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 00:54:10.631 [2024-12-09 05:49:04.737363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.737427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 00:54:10.631 [2024-12-09 05:49:04.737669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.737729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 
00:54:10.631 [2024-12-09 05:49:04.738013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.738074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 00:54:10.631 [2024-12-09 05:49:04.738379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.738441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 00:54:10.631 [2024-12-09 05:49:04.738683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.738743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 00:54:10.631 [2024-12-09 05:49:04.739040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.739100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 00:54:10.631 [2024-12-09 05:49:04.739373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.739436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 00:54:10.631 [2024-12-09 05:49:04.739712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.739776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 00:54:10.631 [2024-12-09 05:49:04.740023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.740084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 00:54:10.631 [2024-12-09 05:49:04.740377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.740441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 00:54:10.631 [2024-12-09 05:49:04.740663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.740727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 00:54:10.631 [2024-12-09 05:49:04.740983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.741044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 
00:54:10.631 [2024-12-09 05:49:04.741295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.741358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 00:54:10.631 [2024-12-09 05:49:04.741563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.741629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 00:54:10.631 [2024-12-09 05:49:04.741915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.741979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 00:54:10.631 [2024-12-09 05:49:04.742264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.742347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 00:54:10.631 [2024-12-09 05:49:04.742568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.742632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 00:54:10.631 [2024-12-09 05:49:04.742873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.742941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 00:54:10.631 [2024-12-09 05:49:04.743241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.743332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 00:54:10.631 [2024-12-09 05:49:04.743595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.743659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 00:54:10.631 [2024-12-09 05:49:04.743939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.744013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 00:54:10.631 [2024-12-09 05:49:04.744254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.631 [2024-12-09 05:49:04.744363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.631 qpair failed and we were unable to recover it. 
00:54:10.632 [2024-12-09 05:49:04.744602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.744667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.744916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.744980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.745147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.745211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.745418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.745485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.745733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.745798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.746097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.746160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.746440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.746506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.746762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.746830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.747025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.747091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.747343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.747408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 
00:54:10.632 [2024-12-09 05:49:04.747681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.747746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.747997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.748064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.748367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.748433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.748747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.748811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.749110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.749177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.749487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.749553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.749766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.749842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.750136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.750208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.750445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.750511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.750725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.750792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 
00:54:10.632 [2024-12-09 05:49:04.750994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.751060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.751257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.751338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.751623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.751688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.751883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.751948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.752198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.752263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.752533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.752601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.752847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.752915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.753158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.753225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.753490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.753555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.753811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.753878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 
00:54:10.632 [2024-12-09 05:49:04.754167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.754232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.632 [2024-12-09 05:49:04.754461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.632 [2024-12-09 05:49:04.754527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.632 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.754780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.754845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.755095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.755161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.755462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.755530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.755774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.755838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.756122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.756187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.756450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.756516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.756772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.756848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.757136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.757200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 
00:54:10.633 [2024-12-09 05:49:04.757514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.757580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.757877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.757941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.758132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.758199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.758468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.758535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.758831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.758895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.759155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.759219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.759458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.759524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.759806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.759870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.760111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.760176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.760495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.760562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 
00:54:10.633 [2024-12-09 05:49:04.760809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.760874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.761095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.761159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.761419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.761484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.761785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.761849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.762139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.762205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.762461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.762528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.762821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.762885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.763182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.763246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.763507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.763573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.763770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.763836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 
00:54:10.633 [2024-12-09 05:49:04.764085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.764150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.764459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.764528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.764775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.764842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.765103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.765169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.765479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.765545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.633 [2024-12-09 05:49:04.765846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.633 [2024-12-09 05:49:04.765912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.633 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.766163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.766228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.766472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.766538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.766721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.766788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.767080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.767145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 
00:54:10.634 [2024-12-09 05:49:04.767401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.767468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.767773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.767837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.768126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.768190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.768471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.768537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.768786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.768852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.769103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.769168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.769476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.769542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.769830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.769895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.770137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.770212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.770451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.770519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 
00:54:10.634 [2024-12-09 05:49:04.770769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.770838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.771125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.771192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.771442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.771508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.771793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.771858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.772060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.772127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.772324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.772391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.772634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.772699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.772889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.772954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.773181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.773246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.773556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.773622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 
00:54:10.634 [2024-12-09 05:49:04.773916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.773983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.774169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.774236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.774561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.774627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.774937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.775002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.775304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.775371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.775627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.775693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.775887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.775953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.776243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.776343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.776643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.776708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.776949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.777014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 
00:54:10.634 [2024-12-09 05:49:04.777259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.777343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.777571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.777636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.634 [2024-12-09 05:49:04.777831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.634 [2024-12-09 05:49:04.777898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.634 qpair failed and we were unable to recover it. 00:54:10.635 [2024-12-09 05:49:04.778187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.635 [2024-12-09 05:49:04.778251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.635 qpair failed and we were unable to recover it. 00:54:10.635 [2024-12-09 05:49:04.778570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.635 [2024-12-09 05:49:04.778634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.635 qpair failed and we were unable to recover it. 00:54:10.635 [2024-12-09 05:49:04.778899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.635 [2024-12-09 05:49:04.778964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.635 qpair failed and we were unable to recover it. 00:54:10.635 [2024-12-09 05:49:04.779210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.635 [2024-12-09 05:49:04.779292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.635 qpair failed and we were unable to recover it. 00:54:10.635 [2024-12-09 05:49:04.779521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.635 [2024-12-09 05:49:04.779586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.635 qpair failed and we were unable to recover it. 00:54:10.635 [2024-12-09 05:49:04.779845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.635 [2024-12-09 05:49:04.779911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.635 qpair failed and we were unable to recover it. 00:54:10.635 [2024-12-09 05:49:04.780198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.635 [2024-12-09 05:49:04.780262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.635 qpair failed and we were unable to recover it. 
00:54:10.635 [2024-12-09 05:49:04.780604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:54:10.635 [2024-12-09 05:49:04.780669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 
00:54:10.635 qpair failed and we were unable to recover it. 
00:54:10.635 [... the same posix_sock_create (connect() failed, errno = 111) / nvme_tcp_qpair_connect_sock error pair, each followed by "qpair failed and we were unable to recover it.", repeats continuously for tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 from 05:49:04.780 through 05:49:04.821 ...]
00:54:10.912 [... one further failed attempt is logged for tqpair=0x7faa0c000b90 at 05:49:04.821, after which the identical error sequence repeats for tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 from 05:49:04.822 through 05:49:04.849 (console timestamps 00:54:10.635 - 00:54:10.914) ...]
00:54:10.914 [2024-12-09 05:49:04.849306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.914 [2024-12-09 05:49:04.849375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.914 qpair failed and we were unable to recover it. 00:54:10.914 [2024-12-09 05:49:04.849664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.914 [2024-12-09 05:49:04.849729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.914 qpair failed and we were unable to recover it. 00:54:10.914 [2024-12-09 05:49:04.850013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.914 [2024-12-09 05:49:04.850078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.914 qpair failed and we were unable to recover it. 00:54:10.914 [2024-12-09 05:49:04.850375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.914 [2024-12-09 05:49:04.850444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.914 qpair failed and we were unable to recover it. 00:54:10.914 [2024-12-09 05:49:04.850738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.914 [2024-12-09 05:49:04.850814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.914 qpair failed and we were unable to recover it. 00:54:10.914 [2024-12-09 05:49:04.851055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.914 [2024-12-09 05:49:04.851120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.914 qpair failed and we were unable to recover it. 00:54:10.914 [2024-12-09 05:49:04.851389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.914 [2024-12-09 05:49:04.851456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.914 qpair failed and we were unable to recover it. 00:54:10.914 [2024-12-09 05:49:04.851700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.914 [2024-12-09 05:49:04.851768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.914 qpair failed and we were unable to recover it. 00:54:10.914 [2024-12-09 05:49:04.852063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.914 [2024-12-09 05:49:04.852128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.914 qpair failed and we were unable to recover it. 00:54:10.914 [2024-12-09 05:49:04.852439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.914 [2024-12-09 05:49:04.852506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.914 qpair failed and we were unable to recover it. 
00:54:10.914 [2024-12-09 05:49:04.852727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.914 [2024-12-09 05:49:04.852793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.914 qpair failed and we were unable to recover it. 00:54:10.914 [2024-12-09 05:49:04.853055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.914 [2024-12-09 05:49:04.853121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.914 qpair failed and we were unable to recover it. 00:54:10.914 [2024-12-09 05:49:04.853386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.914 [2024-12-09 05:49:04.853458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.914 qpair failed and we were unable to recover it. 00:54:10.914 [2024-12-09 05:49:04.853707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.853772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.854009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.854075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.854326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.854395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.854692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.854758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.855004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.855070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.855360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.855428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.855721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.855786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 
00:54:10.915 [2024-12-09 05:49:04.855994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.856059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.856315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.856385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.856631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.856698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.856978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.857043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.857263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.857348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.857697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.857796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.858019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.858089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.858397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.858468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.858692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.858758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.859037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.859102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 
00:54:10.915 [2024-12-09 05:49:04.859350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.859416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.859674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.859740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.860002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.860071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.860353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.860419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.860680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.860748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.860986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.861051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.861327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.861392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.861649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.861714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.861954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.862018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.862305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.862377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 
00:54:10.915 [2024-12-09 05:49:04.862625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.862692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.862919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.862984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.863297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.863367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.863660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.863724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.863946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.864010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.864308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.864373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.864676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.864743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.865001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.865066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.865230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.865318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.865609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.865674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 
00:54:10.915 [2024-12-09 05:49:04.865936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.915 [2024-12-09 05:49:04.866000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.915 qpair failed and we were unable to recover it. 00:54:10.915 [2024-12-09 05:49:04.866248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.866325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.866576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.866651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.866864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.866931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.867228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.867306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.867564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.867629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.867927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.867993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.868307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.868379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.868646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.868710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.868919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.868982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 
00:54:10.916 [2024-12-09 05:49:04.869264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.869352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.869667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.869732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.870017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.870085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.870348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.870415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.870721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.870796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.871009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.871076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.871383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.871449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.871657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.871722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.871904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.871968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.872222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.872300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 
00:54:10.916 [2024-12-09 05:49:04.872565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.872630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.872877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.872941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.873187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.873252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.873469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.873534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.873834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.873898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.874107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.874175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.874485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.874561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.874809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.874873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.875136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.875201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.875505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.875581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 
00:54:10.916 [2024-12-09 05:49:04.875812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.875878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.876098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.876163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.876483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.916 [2024-12-09 05:49:04.876548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.916 qpair failed and we were unable to recover it. 00:54:10.916 [2024-12-09 05:49:04.876758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.876826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.877035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.877100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.877336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.877402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.877699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.877763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.878017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.878082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.878374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.878439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.878628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.878692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 
00:54:10.917 [2024-12-09 05:49:04.878886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.878949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.879196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.879261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.879521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.879586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.879849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.879916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.880153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.880218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.880484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.880551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.880817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.880881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.881186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.881250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.881509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.881574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.881806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.881870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 
00:54:10.917 [2024-12-09 05:49:04.882121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.882186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.882492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.882557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.882843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.882907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.883199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.883263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.883573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.883638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.883944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.884007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.884263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.884348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.884625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.884690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.884887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.884950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.885242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.885351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 
00:54:10.917 [2024-12-09 05:49:04.885596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.885663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.885908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.885972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.886229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.886317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.886614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.886677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.886976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.887040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.887234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.887316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.887608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.887671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.887955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.888018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.888321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.888396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.888696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.888760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 
00:54:10.917 [2024-12-09 05:49:04.888963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.917 [2024-12-09 05:49:04.889054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.917 qpair failed and we were unable to recover it. 00:54:10.917 [2024-12-09 05:49:04.889298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.918 [2024-12-09 05:49:04.889364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.918 qpair failed and we were unable to recover it. 00:54:10.918 [2024-12-09 05:49:04.889622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.918 [2024-12-09 05:49:04.889686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.918 qpair failed and we were unable to recover it. 00:54:10.918 [2024-12-09 05:49:04.889925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.918 [2024-12-09 05:49:04.889989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.918 qpair failed and we were unable to recover it. 00:54:10.918 [2024-12-09 05:49:04.890240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.918 [2024-12-09 05:49:04.890318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.918 qpair failed and we were unable to recover it. 00:54:10.918 [2024-12-09 05:49:04.890548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.918 [2024-12-09 05:49:04.890611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.918 qpair failed and we were unable to recover it. 00:54:10.918 [2024-12-09 05:49:04.890774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.918 [2024-12-09 05:49:04.890837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.918 qpair failed and we were unable to recover it. 00:54:10.918 [2024-12-09 05:49:04.891136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.918 [2024-12-09 05:49:04.891200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.918 qpair failed and we were unable to recover it. 00:54:10.918 [2024-12-09 05:49:04.891454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.918 [2024-12-09 05:49:04.891519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.918 qpair failed and we were unable to recover it. 00:54:10.918 [2024-12-09 05:49:04.891812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.918 [2024-12-09 05:49:04.891883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.918 qpair failed and we were unable to recover it. 
00:54:10.918 [2024-12-09 05:49:04.892186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:10.918 [2024-12-09 05:49:04.892250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420
00:54:10.918 qpair failed and we were unable to recover it.
[log trimmed: the identical three-entry sequence above — posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeats continuously from 05:49:04.892 through 05:49:04.962]
00:54:10.923 [2024-12-09 05:49:04.962497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:10.923 [2024-12-09 05:49:04.962561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420
00:54:10.923 qpair failed and we were unable to recover it.
00:54:10.923 [2024-12-09 05:49:04.962799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.923 [2024-12-09 05:49:04.962863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.923 qpair failed and we were unable to recover it. 00:54:10.923 [2024-12-09 05:49:04.963154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.923 [2024-12-09 05:49:04.963218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.923 qpair failed and we were unable to recover it. 00:54:10.923 [2024-12-09 05:49:04.963483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.923 [2024-12-09 05:49:04.963547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.923 qpair failed and we were unable to recover it. 00:54:10.923 [2024-12-09 05:49:04.963735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.923 [2024-12-09 05:49:04.963799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.923 qpair failed and we were unable to recover it. 00:54:10.923 [2024-12-09 05:49:04.964053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.923 [2024-12-09 05:49:04.964117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.923 qpair failed and we were unable to recover it. 00:54:10.923 [2024-12-09 05:49:04.964322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.923 [2024-12-09 05:49:04.964387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.923 qpair failed and we were unable to recover it. 00:54:10.923 [2024-12-09 05:49:04.964673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.923 [2024-12-09 05:49:04.964736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.923 qpair failed and we were unable to recover it. 00:54:10.923 [2024-12-09 05:49:04.964993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.965057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.965348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.965413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.965636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.965700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 
00:54:10.924 [2024-12-09 05:49:04.965870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.965933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.966213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.966302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.966578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.966643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.966945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.967019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.967231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.967315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.967582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.967646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.967943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.968016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.968313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.968379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.968666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.968730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.969009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.969073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 
00:54:10.924 [2024-12-09 05:49:04.969331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.969396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.969664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.969728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.969962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.970026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.970234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.970312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.970526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.970589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.970859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.970931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.971145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.971209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.971513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.971578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.971877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.971941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.972239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.972327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 
00:54:10.924 [2024-12-09 05:49:04.972570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.972635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.972922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.972985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.973230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.973320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.973575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.973639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.973854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.973917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.974206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.974270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.974541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.974605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.974839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.974902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.975139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.975202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.975532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.975608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 
00:54:10.924 [2024-12-09 05:49:04.975861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.975927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.976211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.976294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.976512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.976576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.976868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.976932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.977185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.977251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.924 [2024-12-09 05:49:04.977576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.924 [2024-12-09 05:49:04.977651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.924 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.977940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.978004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.978288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.978353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.978606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.978669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.978920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.978984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 
00:54:10.925 [2024-12-09 05:49:04.979213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.979293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.979514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.979580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.979864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.979938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.980191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.980255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.980562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.980626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.980871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.980935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.981232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.981331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.981587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.981650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.981939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.982002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.982202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.982265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 
00:54:10.925 [2024-12-09 05:49:04.982579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.982643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.982832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.982896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.983121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.983184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.983390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.983456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.983691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.983754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.984050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.984123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.984431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.984497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.984754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.984818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.985070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.985136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.985429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.985495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 
00:54:10.925 [2024-12-09 05:49:04.985753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.985816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.986054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.986118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.986416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.986482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.986798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.986862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.987156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.987231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.987504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.987568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.987872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.987935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.988217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.988309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.988609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.988673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 00:54:10.925 [2024-12-09 05:49:04.988931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.988994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.925 qpair failed and we were unable to recover it. 
00:54:10.925 [2024-12-09 05:49:04.989320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.925 [2024-12-09 05:49:04.989393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.989662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.989725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.989982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.990048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.990358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.990433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.990747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.990811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.991067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.991130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.991393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.991461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.991773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.991841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.992149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.992213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.992508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.992584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 
00:54:10.926 [2024-12-09 05:49:04.992725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.992756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.992940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.993002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.993237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.993280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.993450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.993488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.993662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.993698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.993896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.993962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.994227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.994314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.994574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.994640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.994839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.994873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.995024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.995057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 
00:54:10.926 [2024-12-09 05:49:04.995298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.995348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.995502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.995538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.995707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.995768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.996006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.996066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.996320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.996378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.996673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.996738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.996971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.997032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.997322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.997385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.997563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.997625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.997872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.997933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 
00:54:10.926 [2024-12-09 05:49:04.998224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.998313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.998588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.998641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.998891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.998943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.999195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.999247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.999522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.999575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:04.999756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:04.999807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:05.000000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:05.000052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:05.000267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.926 [2024-12-09 05:49:05.000351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.926 qpair failed and we were unable to recover it. 00:54:10.926 [2024-12-09 05:49:05.000527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.000576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.000827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.000874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 
00:54:10.927 [2024-12-09 05:49:05.001031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.001079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.001299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.001349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.001521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.001571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.001775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.001824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.002024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.002074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.002308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.002359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.002606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.002656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.002805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.002856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.003031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.003114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.003374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.003426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 
00:54:10.927 [2024-12-09 05:49:05.003658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.003727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.003976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.004038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.004264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.004337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.004571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.004633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.004915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.004985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.005243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.005325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.005563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.005636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.005874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.005945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.006186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.006237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.006500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.006571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 
00:54:10.927 [2024-12-09 05:49:05.006807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.006878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.007171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.007241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.007509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.007590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.007858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.007930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.008132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.008185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.008437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.008509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.008728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.008809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.009040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.009127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.009351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.009427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.009649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.009720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 
00:54:10.927 [2024-12-09 05:49:05.009972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.010023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.010267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.010328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.010597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.010674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.010902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.010975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.011179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.011233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.011469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.927 [2024-12-09 05:49:05.011539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.927 qpair failed and we were unable to recover it. 00:54:10.927 [2024-12-09 05:49:05.011779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.011830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.012058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.012129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.012353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.012428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.012680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.012733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 
00:54:10.928 [2024-12-09 05:49:05.012936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.012990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.013205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.013262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.013515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.013566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.013843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.013915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.014115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.014166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.014388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.014463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.014601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.014652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.014823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.014896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.015062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.015114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.015384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.015458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 
00:54:10.928 [2024-12-09 05:49:05.015679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.015731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.015909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.015963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.016188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.016240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.016485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.016537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.016759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.016813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.017019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.017070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.017255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.017321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.017517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.017580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.017766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.017826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.018037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.018088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 
00:54:10.928 [2024-12-09 05:49:05.018298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.018366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.018484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.018520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.018694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.018739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.018877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.018914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.019036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.019073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.019223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.019267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.019386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.019411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.019508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.019534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.019633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.019658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.019767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.019793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 
00:54:10.928 [2024-12-09 05:49:05.019959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.020011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.020187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.020242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.020410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.020436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.020530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.020554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.928 qpair failed and we were unable to recover it. 00:54:10.928 [2024-12-09 05:49:05.020677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.928 [2024-12-09 05:49:05.020729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.020925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.020976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.021170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.021223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.021401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.021428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.021507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.021533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.021618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.021645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 
00:54:10.929 [2024-12-09 05:49:05.021814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.021839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.021962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.021988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.022102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.022128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.022331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.022359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.022447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.022472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.022585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.022611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.022735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.022761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.022938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.022990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.023121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.023183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.023400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.023427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 
00:54:10.929 [2024-12-09 05:49:05.023514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.023539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.023654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.023681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.023798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.023825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.023898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.023924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.024041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.024071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.024212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.024263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.024420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.024448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.024561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.024589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.024794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.024846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.025097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.025149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 
00:54:10.929 [2024-12-09 05:49:05.025333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.025359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.025495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.025521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.025760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.025831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.026038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.026089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.026270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.026304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.026399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.026424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.026536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.026573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.026728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.026799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.026965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.027020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.027236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.027327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 
00:54:10.929 [2024-12-09 05:49:05.027441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.027467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.027548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.027616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.929 qpair failed and we were unable to recover it. 00:54:10.929 [2024-12-09 05:49:05.027795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.929 [2024-12-09 05:49:05.027822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.027944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.027970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.028079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.028130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.028345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.028373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.028453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.028478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.028561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.028591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.028711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.028781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.028958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.029012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 
00:54:10.930 [2024-12-09 05:49:05.029256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.029331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.029415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.029440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.029549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.029612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.029776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.029829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.030066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.030117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.030327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.030354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.030437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.030464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.030546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.030617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.030794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.030820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.030909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.030936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 
00:54:10.930 [2024-12-09 05:49:05.031017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.031043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.031123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.031149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.031316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.031343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.031439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.031464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.031601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.031660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.031847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.031909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.032151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.032203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.032389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.032416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.032509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.032534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.032641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.032665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 
00:54:10.930 [2024-12-09 05:49:05.032776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.032802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.032885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.032909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.033083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.033135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.033314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.033370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.033507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.033533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.930 [2024-12-09 05:49:05.033629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.930 [2024-12-09 05:49:05.033676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.930 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.033935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.033988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.034239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.034321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.034455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.034482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.034603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.034641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 
00:54:10.931 [2024-12-09 05:49:05.034724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.034748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.034861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.034912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.035060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.035119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.035322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.035371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.035516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.035542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.035636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.035679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.035852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.035878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.035971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.035996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.036140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.036191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.036368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.036396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 
00:54:10.931 [2024-12-09 05:49:05.036565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.036617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.036802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.036867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.037037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.037090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.037343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.037396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.037652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.037703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.037945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.037997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.038231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.038294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.038484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.038559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.038845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.038916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.039129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.039180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 
00:54:10.931 [2024-12-09 05:49:05.039448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.039519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.039779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.039831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.040034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.040087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.040223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.040296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.040506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.040597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.040874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.040950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.041155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.041217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.041512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.041588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.041876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.041947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.042154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.042206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 
00:54:10.931 [2024-12-09 05:49:05.042454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.042525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.042786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.042838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.043061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.043112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.931 [2024-12-09 05:49:05.043381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.931 [2024-12-09 05:49:05.043453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.931 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.043705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.043757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.044008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.044060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.044243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.044328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.044534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.044605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.044795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.044846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.045057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.045109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 
00:54:10.932 [2024-12-09 05:49:05.045316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.045370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.045614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.045685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.045954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.046024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.046200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.046253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.046513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.046596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.046868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.046945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.047156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.047219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.047495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.047564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.047805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.047876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.048111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.048163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 
00:54:10.932 [2024-12-09 05:49:05.048458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.048529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.048773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.048843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.049043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.049095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.049306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.049360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.049592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.049661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.049861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.049932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.050185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.050238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.050532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.050602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.050879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.050950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.051157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.051209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 
00:54:10.932 [2024-12-09 05:49:05.051527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.051599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.051891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.051963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.052207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.052258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.052500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.052583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.052742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.052809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.053057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.053127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.053328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.053381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.053623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.053691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.053961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.054033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.054249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.054321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 
00:54:10.932 [2024-12-09 05:49:05.054567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.054637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.054869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.932 [2024-12-09 05:49:05.054940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.932 qpair failed and we were unable to recover it. 00:54:10.932 [2024-12-09 05:49:05.055180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.055231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.055564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.055654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.055943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.056013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.056249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.056319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.056552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.056625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.056804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.056875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.057119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.057187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.057438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.057510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 
00:54:10.933 [2024-12-09 05:49:05.057778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.057848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.058105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.058158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.058419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.058490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.058773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.058844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.059101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.059152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.059424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.059495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.059776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.059848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.060049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.060099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.060371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.060442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.060715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.060787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 
00:54:10.933 [2024-12-09 05:49:05.061037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.061108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.061366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.061437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.061647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.061699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.061897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.061949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.062134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.062186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.062401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.062455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.062627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.062679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.062850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.062904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.063110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.063163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.063442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.063525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 
00:54:10.933 [2024-12-09 05:49:05.063813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.063885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.064148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.064201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.064495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.064574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.064847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.064918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.065116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.065176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.065445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.065517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.065787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.065857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.066092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.066144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.066418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.066489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.933 qpair failed and we were unable to recover it. 00:54:10.933 [2024-12-09 05:49:05.066780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.933 [2024-12-09 05:49:05.066850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 
00:54:10.934 [2024-12-09 05:49:05.067044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.067095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.067303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.067357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.067589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.067660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.067890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.067963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.068205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.068256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.068548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.068624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.068890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.068942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.069174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.069225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.069499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.069577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.069824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.069894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 
00:54:10.934 [2024-12-09 05:49:05.070061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.070112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.070317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.070369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.070601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.070674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.070971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.071042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.071299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.071351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.071594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.071646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.071920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.071990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.072171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.072223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.072412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.072484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.072716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.072769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 
00:54:10.934 [2024-12-09 05:49:05.072964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.073015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.073226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.073288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.073537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.073608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.073886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.073956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.074184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.074235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.074487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.074558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.074772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.074845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.075046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.075099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.075316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.075368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.075582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.075653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 
00:54:10.934 [2024-12-09 05:49:05.075942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.076022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.076261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.076323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.076510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.076584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.934 [2024-12-09 05:49:05.076805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.934 [2024-12-09 05:49:05.076875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.934 qpair failed and we were unable to recover it. 00:54:10.935 [2024-12-09 05:49:05.077069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.077128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 00:54:10.935 [2024-12-09 05:49:05.077407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.077489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 00:54:10.935 [2024-12-09 05:49:05.077745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.077816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 00:54:10.935 [2024-12-09 05:49:05.078035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.078088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 00:54:10.935 [2024-12-09 05:49:05.078348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.078421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 00:54:10.935 [2024-12-09 05:49:05.078605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.078680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 
00:54:10.935 [2024-12-09 05:49:05.078909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.078960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 00:54:10.935 [2024-12-09 05:49:05.079204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.079256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 00:54:10.935 [2024-12-09 05:49:05.079550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.079622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 00:54:10.935 [2024-12-09 05:49:05.079876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.079948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 00:54:10.935 [2024-12-09 05:49:05.080189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.080240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 00:54:10.935 [2024-12-09 05:49:05.080527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.080597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 00:54:10.935 [2024-12-09 05:49:05.080802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.080873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 00:54:10.935 [2024-12-09 05:49:05.081060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.081111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 00:54:10.935 [2024-12-09 05:49:05.081291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.081343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 00:54:10.935 [2024-12-09 05:49:05.081578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.081648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 
00:54:10.935 [2024-12-09 05:49:05.081925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.081996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 00:54:10.935 [2024-12-09 05:49:05.082202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.082256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 00:54:10.935 [2024-12-09 05:49:05.082551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.082632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 00:54:10.935 [2024-12-09 05:49:05.082843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.082895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 00:54:10.935 [2024-12-09 05:49:05.083056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.083107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 00:54:10.935 [2024-12-09 05:49:05.083306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.083358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 00:54:10.935 [2024-12-09 05:49:05.083567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.083638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 00:54:10.935 [2024-12-09 05:49:05.083918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.084002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 00:54:10.935 [2024-12-09 05:49:05.084200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.084254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 00:54:10.935 [2024-12-09 05:49:05.084526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.084601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 
00:54:10.935 [2024-12-09 05:49:05.084837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.084906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 00:54:10.935 [2024-12-09 05:49:05.085124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.935 [2024-12-09 05:49:05.085175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.935 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.085389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.085459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.085680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.085751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.085984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.086055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.086302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.086355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.086540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.086611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.086854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.086923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.087140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.087191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.087424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.087495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 
00:54:10.936 [2024-12-09 05:49:05.087774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.087855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.088059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.088113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.088341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.088415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.088605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.088673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.088910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.088989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.089201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.089252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.089502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.089572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.089828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.089899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.090129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.090181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.090437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.090508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 
00:54:10.936 [2024-12-09 05:49:05.090690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.090763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.091056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.091133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.091292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.091344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.091586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.091656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.091836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.091915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.092091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.092142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.092374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.092445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.092647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.092700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.092957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.093027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.093262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.093324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 
00:54:10.936 [2024-12-09 05:49:05.093582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.093652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.093931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.094001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.094240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.094302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.094484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.094556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.094826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.094896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.095099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.095150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.095330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.936 [2024-12-09 05:49:05.095385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.936 qpair failed and we were unable to recover it. 00:54:10.936 [2024-12-09 05:49:05.095633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.095686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.095944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.095995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.096239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.096303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 
00:54:10.937 [2024-12-09 05:49:05.096532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.096604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.096843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.096914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.097096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.097147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.097424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.097496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.097667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.097742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.097975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.098045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.098288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.098341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.098561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.098638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.098887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.098939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.099132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.099183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 
00:54:10.937 [2024-12-09 05:49:05.099380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.099450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.099737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.099815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.099967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.100018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.100258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.100321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.100564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.100642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.100935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.101015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.101190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.101243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.101529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.101622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.101896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.101948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.102182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.102233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 
00:54:10.937 [2024-12-09 05:49:05.102478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.102549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.102793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.102846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.103094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.103146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.103416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.103487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.103674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.103744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.103964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.104035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.104284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.104336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.104624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.104704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.104937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.105008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.105246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.105309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 
00:54:10.937 [2024-12-09 05:49:05.105542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.105611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.105858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.937 [2024-12-09 05:49:05.105929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.937 qpair failed and we were unable to recover it. 00:54:10.937 [2024-12-09 05:49:05.106088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.106141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.106398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.106470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.106719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.106772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.107025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.107094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.107356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.107427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.107650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.107721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.107996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.108068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.108221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.108284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 
00:54:10.938 [2024-12-09 05:49:05.108523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.108607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.108857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.108928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.109130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.109184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.109426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.109496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.109713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.109782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.109965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.110016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.110210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.110262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.110544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.110614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.110848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.110918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.111130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.111181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 
00:54:10.938 [2024-12-09 05:49:05.111460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.111513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.111788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.111869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.112058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.112109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.112359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.112434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.112629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.112693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.112855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.112907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.113103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.113157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.113335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.113389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.113593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.113646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.113818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.113872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 
00:54:10.938 [2024-12-09 05:49:05.114087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.114138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.114382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.114435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.114715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.114796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.114960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.115011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.115162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.115214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.115461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.115533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.938 [2024-12-09 05:49:05.115754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.938 [2024-12-09 05:49:05.115825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.938 qpair failed and we were unable to recover it. 00:54:10.939 [2024-12-09 05:49:05.116063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.939 [2024-12-09 05:49:05.116115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.939 qpair failed and we were unable to recover it. 00:54:10.939 [2024-12-09 05:49:05.116293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.939 [2024-12-09 05:49:05.116346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.939 qpair failed and we were unable to recover it. 00:54:10.939 [2024-12-09 05:49:05.116590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.939 [2024-12-09 05:49:05.116660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.939 qpair failed and we were unable to recover it. 
00:54:10.939 [2024-12-09 05:49:05.116927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.939 [2024-12-09 05:49:05.116979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.939 qpair failed and we were unable to recover it. 00:54:10.939 [2024-12-09 05:49:05.117191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.939 [2024-12-09 05:49:05.117241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.939 qpair failed and we were unable to recover it. 00:54:10.939 [2024-12-09 05:49:05.117536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.939 [2024-12-09 05:49:05.117619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.939 qpair failed and we were unable to recover it. 00:54:10.939 [2024-12-09 05:49:05.117868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.939 [2024-12-09 05:49:05.117938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.939 qpair failed and we were unable to recover it. 00:54:10.939 [2024-12-09 05:49:05.118093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.939 [2024-12-09 05:49:05.118144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.939 qpair failed and we were unable to recover it. 00:54:10.939 [2024-12-09 05:49:05.118374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.939 [2024-12-09 05:49:05.118451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.939 qpair failed and we were unable to recover it. 00:54:10.939 [2024-12-09 05:49:05.118730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.939 [2024-12-09 05:49:05.118799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.939 qpair failed and we were unable to recover it. 00:54:10.939 [2024-12-09 05:49:05.119047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.939 [2024-12-09 05:49:05.119098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.939 qpair failed and we were unable to recover it. 00:54:10.939 [2024-12-09 05:49:05.119302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.939 [2024-12-09 05:49:05.119354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.939 qpair failed and we were unable to recover it. 00:54:10.939 [2024-12-09 05:49:05.119597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.939 [2024-12-09 05:49:05.119667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.939 qpair failed and we were unable to recover it. 
00:54:10.939 [2024-12-09 05:49:05.119872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.939 [2024-12-09 05:49:05.119943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.939 qpair failed and we were unable to recover it. 00:54:10.939 [2024-12-09 05:49:05.120160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.939 [2024-12-09 05:49:05.120212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.939 qpair failed and we were unable to recover it. 00:54:10.939 [2024-12-09 05:49:05.120461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.939 [2024-12-09 05:49:05.120532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.939 qpair failed and we were unable to recover it. 00:54:10.939 [2024-12-09 05:49:05.120825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.939 [2024-12-09 05:49:05.120904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.939 qpair failed and we were unable to recover it. 00:54:10.939 [2024-12-09 05:49:05.121103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.939 [2024-12-09 05:49:05.121153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.939 qpair failed and we were unable to recover it. 00:54:10.939 [2024-12-09 05:49:05.121429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.939 [2024-12-09 05:49:05.121499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.939 qpair failed and we were unable to recover it. 00:54:10.939 [2024-12-09 05:49:05.121731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.939 [2024-12-09 05:49:05.121801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.939 qpair failed and we were unable to recover it. 00:54:10.939 [2024-12-09 05:49:05.122009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.939 [2024-12-09 05:49:05.122059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:10.939 qpair failed and we were unable to recover it. 00:54:10.939 [2024-12-09 05:49:05.122300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:10.939 [2024-12-09 05:49:05.122352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.214 qpair failed and we were unable to recover it. 00:54:11.214 [2024-12-09 05:49:05.122636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.214 [2024-12-09 05:49:05.122717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.214 qpair failed and we were unable to recover it. 
00:54:11.214 [2024-12-09 05:49:05.122998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.214 [2024-12-09 05:49:05.123068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.214 qpair failed and we were unable to recover it. 00:54:11.214 [2024-12-09 05:49:05.123299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.214 [2024-12-09 05:49:05.123351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.214 qpair failed and we were unable to recover it. 00:54:11.214 [2024-12-09 05:49:05.123628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.123698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.123941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.124012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.124207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.124266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.124482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.124555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.124796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.124868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.125049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.125120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.125316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.125369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.125589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.125660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 
00:54:11.215 [2024-12-09 05:49:05.125858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.125929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.126163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.126214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.126446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.126499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.126791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.126861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.127066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.127117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.127356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.127433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.127690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.127759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.127984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.128035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.128200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.128254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.128585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.128647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 
00:54:11.215 [2024-12-09 05:49:05.128880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.128951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.129157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.129209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.129500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.129581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.129804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.129874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.130035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.130087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.130349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.130427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.130635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.130710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.130887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.130959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.131187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.131239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.131434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.131486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 
00:54:11.215 [2024-12-09 05:49:05.131690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.131741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.131958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.132009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.132172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.132226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.132436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.132488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.132650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.132702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.132891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.132943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.215 qpair failed and we were unable to recover it. 00:54:11.215 [2024-12-09 05:49:05.133183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.215 [2024-12-09 05:49:05.133234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.133463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.133515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.133683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.133735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.133881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.133932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 
00:54:11.216 [2024-12-09 05:49:05.134118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.134170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.134445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.134518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.134753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.134823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.134996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.135047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.135251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.135328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.135561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.135631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.135855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.135927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.136126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.136180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.136472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.136545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.136824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.136894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 
00:54:11.216 [2024-12-09 05:49:05.137139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.137190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.137389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.137466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.137745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.137816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.138040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.138111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.138343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.138420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.138699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.138777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.139069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.139138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.139409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.139489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.139704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.139779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.140020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.140071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 
00:54:11.216 [2024-12-09 05:49:05.140362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.140432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.140687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.140739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.141017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.141098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.141368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.141441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.141723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.141803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.141967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.142018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.142218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.142291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.142494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.142577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.142853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.142923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.143164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.143215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 
00:54:11.216 [2024-12-09 05:49:05.143468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.143543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.143835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.143914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.144131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.144189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.144402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.216 [2024-12-09 05:49:05.144480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.216 qpair failed and we were unable to recover it. 00:54:11.216 [2024-12-09 05:49:05.144706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.144777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.144982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.145033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.145280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.145333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.145625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.145702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.145994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.146063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.146315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.146367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 
00:54:11.217 [2024-12-09 05:49:05.146583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.146663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.146840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.146910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.147108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.147160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.147444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.147523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.147753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.147833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.148032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.148101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.148266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.148344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.148518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.148596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.148819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.148888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.149134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.149186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 
00:54:11.217 [2024-12-09 05:49:05.149471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.149542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.149832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.149901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.150101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.150155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.150401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.150476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.150771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.150841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.151039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.151090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.151299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.151354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.151596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.151668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.151896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.151968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.152126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.152177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 
00:54:11.217 [2024-12-09 05:49:05.152408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.152480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.152699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.152771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.153016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.153067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.153305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.153358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.153594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.153669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.153804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.153855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.154015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.154066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.154254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.154318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.154534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.154605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.154842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.154917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 
00:54:11.217 [2024-12-09 05:49:05.155174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.155228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.217 qpair failed and we were unable to recover it. 00:54:11.217 [2024-12-09 05:49:05.155334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe24f30 (9): Bad file descriptor 00:54:11.217 [2024-12-09 05:49:05.155756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.217 [2024-12-09 05:49:05.155867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.156144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.156213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.156492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.156546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.156898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.156964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.157259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.157354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.157569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.157622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.157882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.157947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.158247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.158349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 
00:54:11.218 [2024-12-09 05:49:05.158538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.158589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.158871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.158936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.159193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.159259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.159522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.159596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.159870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.159935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.160203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.160293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.160566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.160644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.160929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.160993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.161241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.161337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.161605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.161669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 
00:54:11.218 [2024-12-09 05:49:05.161968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.162043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.162361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.162414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.162658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.162724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.163005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.163071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.163339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.163393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.163617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.163669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.163967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.164038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.164354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.164406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.164649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.164711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.164935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.165009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 
00:54:11.218 [2024-12-09 05:49:05.165227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.165290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.165495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.165547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.165872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.165947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.166160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.166223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.166449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.166501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.166671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.166723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.166996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.167060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.167330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.167385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.167653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.167718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 00:54:11.218 [2024-12-09 05:49:05.168018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.218 [2024-12-09 05:49:05.168082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.218 qpair failed and we were unable to recover it. 
00:54:11.218 [2024-12-09 05:49:05.168339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.168407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.168668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.168734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.169037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.169101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.169345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.169413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.169629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.169693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.169941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.170006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.170304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.170370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.170664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.170740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.170955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.171021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.171315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.171393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 
00:54:11.219 [2024-12-09 05:49:05.171643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.171710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.172002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.172067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.172317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.172383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.172644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.172709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.172993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.173058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.173350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.173449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.173760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.173827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.174069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.174134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.174351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.174419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.174657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.174720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 
00:54:11.219 [2024-12-09 05:49:05.174938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.175002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.175303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.175379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.175595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.175660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.175952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.176015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.176306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.176372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.176564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.176627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.176906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.176969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.177155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.177219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.177519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.177586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.177886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.177950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 
00:54:11.219 [2024-12-09 05:49:05.178201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.178267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.178490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.178557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.178812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.219 [2024-12-09 05:49:05.178876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.219 qpair failed and we were unable to recover it. 00:54:11.219 [2024-12-09 05:49:05.179107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.179170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.179413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.179481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.179712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.179776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.179987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.180050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.180307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.180373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.180572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.180635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.180887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.180950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 
00:54:11.220 [2024-12-09 05:49:05.181199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.181265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.181551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.181616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.181923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.182003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.182304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.182370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.182622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.182686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.182894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.182957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.183193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.183257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.183572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.183636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.183930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.184003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.184253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.184337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 
00:54:11.220 [2024-12-09 05:49:05.184571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.184634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.184855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.184918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.185134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.185201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.185457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.185521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.185807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.185871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.186158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.186221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.186539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.186613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.186898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.186961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.187207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.187300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.187596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.187660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 
00:54:11.220 [2024-12-09 05:49:05.187877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.187943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.188183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.188248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.188561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.188626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.188931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.188995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.189304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.189368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.189633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.189697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.189989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.190053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.190321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.190386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.190672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.190736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.190988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.191061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 
00:54:11.220 [2024-12-09 05:49:05.191315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.191380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.220 [2024-12-09 05:49:05.191631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.220 [2024-12-09 05:49:05.191696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.220 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.191989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.192052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.192338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.192403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.192652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.192718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.193009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.193072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.193364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.193429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.193720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.193782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.194027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.194091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.194357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.194423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 
00:54:11.221 [2024-12-09 05:49:05.194631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.194694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.194883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.194947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.195234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.195313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.195558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.195623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.195865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.195928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.196220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.196295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.196550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.196614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.196834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.196899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.197114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.197179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.197454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.197519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 
00:54:11.221 [2024-12-09 05:49:05.197719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.197783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.198030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.198093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.198318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.198383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.198594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.198658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.198901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.198967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.199226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.199313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.199533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.199597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.199888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.199952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.200149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.200211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.200465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.200529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 
00:54:11.221 [2024-12-09 05:49:05.200830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.200904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.201151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.201213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.201458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.201521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.201738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.201802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.202013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.202078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.202325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.202392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.202695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.202769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.203028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.203091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.203379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.203444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 00:54:11.221 [2024-12-09 05:49:05.203626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.221 [2024-12-09 05:49:05.203690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.221 qpair failed and we were unable to recover it. 
00:54:11.221 [2024-12-09 05:49:05.203937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.204011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.204259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.204336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.204586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.204648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.204939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.205002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.205298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.205363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.205613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.205674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.205929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.205992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.206245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.206326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.206617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.206679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.206923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.206988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 
00:54:11.222 [2024-12-09 05:49:05.207304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.207374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.207598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.207662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.207907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.207969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.208251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.208336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.208649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.208722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.208966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.209029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.209294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.209358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.209643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.209708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.210005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.210068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.210251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.210333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 
00:54:11.222 [2024-12-09 05:49:05.210622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.210686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.210893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.210957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.211192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.211256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.211520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.211584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.211872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.211934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.212199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.212262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.212481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.212544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.212826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.212890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.213118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.213181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.213395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.213460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 
00:54:11.222 [2024-12-09 05:49:05.213732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.213794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.214026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.214089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.214391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.214466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.214716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.214781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.215065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.215128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.215429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.215494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.215740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.215805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.216097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.216161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.222 [2024-12-09 05:49:05.216459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.222 [2024-12-09 05:49:05.216524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.222 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.216712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.216777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 
00:54:11.223 [2024-12-09 05:49:05.217023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.217086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.217357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.217420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.217597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.217660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.217914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.217978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.218211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.218301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.218553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.218617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.218915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.218988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.219240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.219332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.219624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.219686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.219923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.219988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 
00:54:11.223 [2024-12-09 05:49:05.220227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.220311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.220569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.220632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.220893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.220957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.221255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.221343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.221632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.221694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.222008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.222081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.222378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.222443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.222694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.222756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.223051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.223114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.223364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.223428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 
00:54:11.223 [2024-12-09 05:49:05.223675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.223740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.224028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.224090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.224346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.224411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.224676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.224739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.224996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.225059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.225292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.225356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.225641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.225704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.225993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.226055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.226350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.226432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.226686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.226750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 
00:54:11.223 [2024-12-09 05:49:05.227049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.227121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.227369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.227434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.227673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.227737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.228041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.228112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.228410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.228475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.228768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.228832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.229086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.223 [2024-12-09 05:49:05.229148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.223 qpair failed and we were unable to recover it. 00:54:11.223 [2024-12-09 05:49:05.229394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.229458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.229757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.229831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.230089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.230152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 
00:54:11.224 [2024-12-09 05:49:05.230441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.230506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.230797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.230861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.231123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.231187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.231441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.231505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.231757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.231821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.232106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.232171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.232387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.232451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.232714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.232776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.233023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.233087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.233292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.233359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 
00:54:11.224 [2024-12-09 05:49:05.233573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.233640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.233887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.233951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.234253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.234336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.234632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.234695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.234980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.235044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.235316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.235381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.235682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.235744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.236048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.236111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.236398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.236463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.236757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.236820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 
00:54:11.224 [2024-12-09 05:49:05.237057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.237120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.237364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.237429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.237680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.237743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.238025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.238087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.238332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.238399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.238650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.238712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.238935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.238998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.239303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.239378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.239671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.239733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.240041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.240116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 
00:54:11.224 [2024-12-09 05:49:05.240414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.240479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.224 qpair failed and we were unable to recover it. 00:54:11.224 [2024-12-09 05:49:05.240674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.224 [2024-12-09 05:49:05.240738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.241028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.241091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.241345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.241410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.241660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.241723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.241932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.241996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.242263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.242340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.242544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.242607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.242814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.242878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.243138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.243201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 
00:54:11.225 [2024-12-09 05:49:05.243474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.243538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.243778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.243841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.244095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.244158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.244423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.244488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.244776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.244839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.245087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.245149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.245348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.245412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.245625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.245690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.245942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.246005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.246301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.246375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 
00:54:11.225 [2024-12-09 05:49:05.246615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.246679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.246932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.246994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.247246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.247361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.247636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.247699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.247948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.248014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.248223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.248302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.248565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.248638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.248889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.248953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.249171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.249234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.249482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.249546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 
00:54:11.225 [2024-12-09 05:49:05.249798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.249861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.250147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.250209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.250519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.250584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.250834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.250900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.251201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.251292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.251546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.251610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.251854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.251917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.252170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.252233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.252505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.252569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 00:54:11.225 [2024-12-09 05:49:05.252858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.225 [2024-12-09 05:49:05.252922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.225 qpair failed and we were unable to recover it. 
00:54:11.225 [2024-12-09 05:49:05.253153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:11.226 [2024-12-09 05:49:05.253216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420
00:54:11.226 qpair failed and we were unable to recover it.
[the same three messages (posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeat for each retry from 2024-12-09 05:49:05.253483 through 05:49:05.321545]
00:54:11.231 [2024-12-09 05:49:05.321805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:11.231 [2024-12-09 05:49:05.321867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420
00:54:11.231 qpair failed and we were unable to recover it.
00:54:11.231 [2024-12-09 05:49:05.322061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.231 [2024-12-09 05:49:05.322124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.231 qpair failed and we were unable to recover it. 00:54:11.231 [2024-12-09 05:49:05.322410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.231 [2024-12-09 05:49:05.322476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.231 qpair failed and we were unable to recover it. 00:54:11.231 [2024-12-09 05:49:05.322726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.231 [2024-12-09 05:49:05.322789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.231 qpair failed and we were unable to recover it. 00:54:11.231 [2024-12-09 05:49:05.323081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.231 [2024-12-09 05:49:05.323144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.231 qpair failed and we were unable to recover it. 00:54:11.231 [2024-12-09 05:49:05.323401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.231 [2024-12-09 05:49:05.323468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.231 qpair failed and we were unable to recover it. 00:54:11.231 [2024-12-09 05:49:05.323755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.231 [2024-12-09 05:49:05.323818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.231 qpair failed and we were unable to recover it. 00:54:11.231 [2024-12-09 05:49:05.324030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.231 [2024-12-09 05:49:05.324094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.231 qpair failed and we were unable to recover it. 00:54:11.231 [2024-12-09 05:49:05.324378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.231 [2024-12-09 05:49:05.324443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.231 qpair failed and we were unable to recover it. 00:54:11.231 [2024-12-09 05:49:05.324707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.231 [2024-12-09 05:49:05.324770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.231 qpair failed and we were unable to recover it. 00:54:11.231 [2024-12-09 05:49:05.325012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.231 [2024-12-09 05:49:05.325078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.231 qpair failed and we were unable to recover it. 
00:54:11.231 [2024-12-09 05:49:05.325334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.231 [2024-12-09 05:49:05.325401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.231 qpair failed and we were unable to recover it. 00:54:11.231 [2024-12-09 05:49:05.325652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.231 [2024-12-09 05:49:05.325715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.231 qpair failed and we were unable to recover it. 00:54:11.231 [2024-12-09 05:49:05.326011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.231 [2024-12-09 05:49:05.326075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.231 qpair failed and we were unable to recover it. 00:54:11.231 [2024-12-09 05:49:05.326321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.231 [2024-12-09 05:49:05.326389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.231 qpair failed and we were unable to recover it. 00:54:11.231 [2024-12-09 05:49:05.326606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.231 [2024-12-09 05:49:05.326670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.231 qpair failed and we were unable to recover it. 00:54:11.231 [2024-12-09 05:49:05.326968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.231 [2024-12-09 05:49:05.327033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.231 qpair failed and we were unable to recover it. 00:54:11.231 [2024-12-09 05:49:05.327328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.231 [2024-12-09 05:49:05.327393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.231 qpair failed and we were unable to recover it. 00:54:11.231 [2024-12-09 05:49:05.327687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.327751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.327934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.327997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.328245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.328331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 
00:54:11.232 [2024-12-09 05:49:05.328525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.328591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.328793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.328866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.329113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.329184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.329504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.329568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.329864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.329937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.330177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.330244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.330510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.330585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.330815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.330879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.331117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.331192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.331493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.331558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 
00:54:11.232 [2024-12-09 05:49:05.331810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.331873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.332066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.332130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.332326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.332391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.332618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.332696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.332950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.333024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.333228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.333332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.333508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.333542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.333660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.333695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.333841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.333875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.334027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.334062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 
00:54:11.232 [2024-12-09 05:49:05.334169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.334204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.334369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.334403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.334501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.334534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.334653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.334695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.334826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.334859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.335026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.335069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.335202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.335235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.335348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.335386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.335540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.335574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.335717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.335752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 
00:54:11.232 [2024-12-09 05:49:05.335865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.335899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.336134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.336235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.336439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.336478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.336641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.336724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.336966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.337020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.232 [2024-12-09 05:49:05.337215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.232 [2024-12-09 05:49:05.337292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.232 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.337482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.337522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.337662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.337697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.337889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.337928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.338121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.338174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 
00:54:11.233 [2024-12-09 05:49:05.338296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.338350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.338497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.338532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.338712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.338763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.338947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.338982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.339159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.339198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.339383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.339419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.339565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.339600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.339775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.339833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.339975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.340012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.340170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.340210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 
00:54:11.233 [2024-12-09 05:49:05.340365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.340405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.340528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.340562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.340743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.340783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.340959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.340995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.341139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.341174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.341289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.341336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.341488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.341529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.341671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.341706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.341874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.341911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.342025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.342062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 
00:54:11.233 [2024-12-09 05:49:05.342207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.342244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.342391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.342427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.342544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.342583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.342727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.342768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.342935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.342972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.343117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.343153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.343317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.343353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.343489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.233 [2024-12-09 05:49:05.343525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.233 qpair failed and we were unable to recover it. 00:54:11.233 [2024-12-09 05:49:05.343670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.343707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.343857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.343893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 
00:54:11.234 [2024-12-09 05:49:05.344015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.344056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.344213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.344250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.344382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.344423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.344526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.344581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.344778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.344814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.345057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.345132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.345365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.345402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.345517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.345553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.345724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.345761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.345885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.345921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 
00:54:11.234 [2024-12-09 05:49:05.346094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.346131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.346282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.346335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.346486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.346522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.346639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.346676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.346844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.346879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.347061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.347096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.347265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.347310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.347426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.347473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.347595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.347629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.347777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.347822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 
00:54:11.234 [2024-12-09 05:49:05.347956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.347991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.348114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.348150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.348298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.348342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.348450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.348485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.348599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.348635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.348742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.348790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.348929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.348964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.349108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.349148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.349347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.349384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.349492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.349527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 
00:54:11.234 [2024-12-09 05:49:05.349693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.349729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.349882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.349919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.350070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.350121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.350243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.350303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.350405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.350446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.350575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.234 [2024-12-09 05:49:05.350614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.234 qpair failed and we were unable to recover it. 00:54:11.234 [2024-12-09 05:49:05.350756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.235 [2024-12-09 05:49:05.350819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.235 qpair failed and we were unable to recover it. 00:54:11.235 [2024-12-09 05:49:05.350932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.235 [2024-12-09 05:49:05.350975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.235 qpair failed and we were unable to recover it. 00:54:11.235 [2024-12-09 05:49:05.351124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.235 [2024-12-09 05:49:05.351164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.235 qpair failed and we were unable to recover it. 00:54:11.235 [2024-12-09 05:49:05.351358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.235 [2024-12-09 05:49:05.351394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.235 qpair failed and we were unable to recover it. 
00:54:11.235 [2024-12-09 05:49:05.351510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:11.235 [2024-12-09 05:49:05.351551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420
00:54:11.235 qpair failed and we were unable to recover it.
[... the preceding three-line error repeats continuously for tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420, from 05:49:05.351510 through 05:49:05.380905 ...]
00:54:11.239 [2024-12-09 05:49:05.381029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:11.239 [2024-12-09 05:49:05.381082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420
00:54:11.239 qpair failed and we were unable to recover it.
[... the same error then continues for tqpair=0xe16fa0 with addr=10.0.0.2, port=4420, briefly switching back to tqpair=0x7faa08000b90 between 05:49:05.383155 and 05:49:05.384031, through 05:49:05.390755 ...]
00:54:11.240 [2024-12-09 05:49:05.390715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:11.240 [2024-12-09 05:49:05.390755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420
00:54:11.240 qpair failed and we were unable to recover it.
00:54:11.240 [2024-12-09 05:49:05.390900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.240 [2024-12-09 05:49:05.390934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.240 qpair failed and we were unable to recover it. 00:54:11.240 [2024-12-09 05:49:05.391029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.240 [2024-12-09 05:49:05.391062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.240 qpair failed and we were unable to recover it. 00:54:11.240 [2024-12-09 05:49:05.391177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.240 [2024-12-09 05:49:05.391213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.240 qpair failed and we were unable to recover it. 00:54:11.240 [2024-12-09 05:49:05.391356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.240 [2024-12-09 05:49:05.391391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.240 qpair failed and we were unable to recover it. 00:54:11.240 [2024-12-09 05:49:05.391496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.240 [2024-12-09 05:49:05.391530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.240 qpair failed and we were unable to recover it. 00:54:11.240 [2024-12-09 05:49:05.391656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.240 [2024-12-09 05:49:05.391689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.240 qpair failed and we were unable to recover it. 00:54:11.240 [2024-12-09 05:49:05.391872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.240 [2024-12-09 05:49:05.391905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.240 qpair failed and we were unable to recover it. 00:54:11.240 [2024-12-09 05:49:05.392045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.240 [2024-12-09 05:49:05.392079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.240 qpair failed and we were unable to recover it. 00:54:11.240 [2024-12-09 05:49:05.392218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.240 [2024-12-09 05:49:05.392251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.240 qpair failed and we were unable to recover it. 00:54:11.240 [2024-12-09 05:49:05.392377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.240 [2024-12-09 05:49:05.392411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.240 qpair failed and we were unable to recover it. 
00:54:11.240 [2024-12-09 05:49:05.392550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.240 [2024-12-09 05:49:05.392586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.240 qpair failed and we were unable to recover it. 00:54:11.240 [2024-12-09 05:49:05.392698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.240 [2024-12-09 05:49:05.392733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.240 qpair failed and we were unable to recover it. 00:54:11.240 [2024-12-09 05:49:05.392862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.240 [2024-12-09 05:49:05.392907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.240 qpair failed and we were unable to recover it. 00:54:11.240 [2024-12-09 05:49:05.393078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.240 [2024-12-09 05:49:05.393112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.240 qpair failed and we were unable to recover it. 00:54:11.240 [2024-12-09 05:49:05.393212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.240 [2024-12-09 05:49:05.393245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.240 qpair failed and we were unable to recover it. 00:54:11.240 [2024-12-09 05:49:05.393391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.240 [2024-12-09 05:49:05.393425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.240 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.393595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.393629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.393760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.393794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.393964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.393997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.394170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.394203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 
00:54:11.241 [2024-12-09 05:49:05.394346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.394382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.394548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.394588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.394719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.394752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.394927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.394961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.395100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.395144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.395269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.395313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.395417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.395450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.395588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.395623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.395736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.395770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.395917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.395958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 
00:54:11.241 [2024-12-09 05:49:05.396128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.396167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.396331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.396366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.396499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.396532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.396711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.396745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.396888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.396922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.397074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.397108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.397223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.397256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.397372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.397404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.397507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.397540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.397738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.397771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 
00:54:11.241 [2024-12-09 05:49:05.397887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.397919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.398105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.398138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.398351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.398388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.398500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.398533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.398697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.398742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.398962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.398995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.399136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.399171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.399330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.399363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.399473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.399506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.399629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.399668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 
00:54:11.241 [2024-12-09 05:49:05.399793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.399825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.399973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.400006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.400091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.400124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.400314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.400362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.400534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.400598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.400755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.400804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.400989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.401039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.241 qpair failed and we were unable to recover it. 00:54:11.241 [2024-12-09 05:49:05.401165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.241 [2024-12-09 05:49:05.401204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.401373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.401424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.401556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.401612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 
00:54:11.242 [2024-12-09 05:49:05.401769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.401819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.401928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.401960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.402098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.402133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.402250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.402292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.402429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.402461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.402608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.402640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.402774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.402806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.402907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.402939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.403048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.403083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.403223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.403256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 
00:54:11.242 [2024-12-09 05:49:05.403413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.403462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.403647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.403696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.403828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.403878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.404053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.404085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.404187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.404221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.404356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.404390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.404494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.404525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.404669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.404702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.404847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.404880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.405043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.405076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 
00:54:11.242 [2024-12-09 05:49:05.405206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.405240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.405360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.405393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.405499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.405531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.405714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.405747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.405885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.405929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.406099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.406133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.406342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.406393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.406515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.406549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.406711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.406761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.406947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.406995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 
00:54:11.242 [2024-12-09 05:49:05.407133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.407166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.407312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.407346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.407460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.407493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.407619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.407654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.407783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.407816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.407980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.408013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.242 [2024-12-09 05:49:05.408142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.242 [2024-12-09 05:49:05.408176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.242 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.408289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.408323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.408453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.408487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.408624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.408657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 
00:54:11.243 [2024-12-09 05:49:05.408784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.408817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.408942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.408974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.409078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.409109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.409243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.409287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.409419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.409452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.409605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.409638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.409744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.409779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.409909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.409945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.410055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.410088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.410252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.410293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 
00:54:11.243 [2024-12-09 05:49:05.410400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.410433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.410590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.410648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.410804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.410854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.410986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.411020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.411193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.411227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.411379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.411412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.411535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.411568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.411739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.411772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.411885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.411920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.412073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.412125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 
00:54:11.243 [2024-12-09 05:49:05.412249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.412295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.412431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.412483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.412624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.412658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.412804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.412836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.413009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.413041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.413211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.413245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.413390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.413439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.413549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.413592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.413730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.413763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 00:54:11.243 [2024-12-09 05:49:05.413899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.243 [2024-12-09 05:49:05.413933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.243 qpair failed and we were unable to recover it. 
00:54:11.243 [2024-12-09 05:49:05.414068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:11.243 [2024-12-09 05:49:05.414101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420
00:54:11.243 qpair failed and we were unable to recover it.
[The same posix_sock_create / nvme_tcp_qpair_connect_sock error pair, each time followed by "qpair failed and we were unable to recover it.", repeats for every subsequent reconnect attempt from 05:49:05.414 through 05:49:05.449 (log timestamps 00:54:11.243 to 00:54:11.532). The attempts cycle over tqpair handles 0x7faa0c000b90, 0xe16fa0 and, once, 0x7faa08000b90, always targeting addr=10.0.0.2, port=4420 with errno = 111; none of the attempts recover.]
00:54:11.532 [2024-12-09 05:49:05.449787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.532 [2024-12-09 05:49:05.449821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.532 qpair failed and we were unable to recover it. 00:54:11.532 [2024-12-09 05:49:05.449916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.532 [2024-12-09 05:49:05.449945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.532 qpair failed and we were unable to recover it. 00:54:11.532 [2024-12-09 05:49:05.450064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.532 [2024-12-09 05:49:05.450093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.532 qpair failed and we were unable to recover it. 00:54:11.532 [2024-12-09 05:49:05.450206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.532 [2024-12-09 05:49:05.450235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.532 qpair failed and we were unable to recover it. 00:54:11.532 [2024-12-09 05:49:05.450406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.532 [2024-12-09 05:49:05.450435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.532 qpair failed and we were unable to recover it. 00:54:11.532 [2024-12-09 05:49:05.450548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.532 [2024-12-09 05:49:05.450599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.532 qpair failed and we were unable to recover it. 00:54:11.532 [2024-12-09 05:49:05.450779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.532 [2024-12-09 05:49:05.450829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.532 qpair failed and we were unable to recover it. 00:54:11.532 [2024-12-09 05:49:05.450973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.532 [2024-12-09 05:49:05.451024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.532 qpair failed and we were unable to recover it. 00:54:11.532 [2024-12-09 05:49:05.451121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.532 [2024-12-09 05:49:05.451149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.532 qpair failed and we were unable to recover it. 00:54:11.532 [2024-12-09 05:49:05.451248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.532 [2024-12-09 05:49:05.451284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.532 qpair failed and we were unable to recover it. 
00:54:11.532 [2024-12-09 05:49:05.451391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.532 [2024-12-09 05:49:05.451423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.532 qpair failed and we were unable to recover it. 00:54:11.532 [2024-12-09 05:49:05.451587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.532 [2024-12-09 05:49:05.451632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.532 qpair failed and we were unable to recover it. 00:54:11.532 [2024-12-09 05:49:05.451779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.532 [2024-12-09 05:49:05.451828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.532 qpair failed and we were unable to recover it. 00:54:11.532 [2024-12-09 05:49:05.451933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.532 [2024-12-09 05:49:05.451960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.532 qpair failed and we were unable to recover it. 00:54:11.532 [2024-12-09 05:49:05.452063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.532 [2024-12-09 05:49:05.452093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.532 qpair failed and we were unable to recover it. 00:54:11.532 [2024-12-09 05:49:05.452184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.532 [2024-12-09 05:49:05.452212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.532 qpair failed and we were unable to recover it. 00:54:11.532 [2024-12-09 05:49:05.452325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.532 [2024-12-09 05:49:05.452354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.532 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.452444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.452473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.452578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.452606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.452763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.452791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 
00:54:11.533 [2024-12-09 05:49:05.452936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.452987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.453101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.453129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.453238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.453304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.453431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.453462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.453551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.453607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.453772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.453821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.453939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.453988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.454085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.454118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.454245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.454284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.454383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.454411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 
00:54:11.533 [2024-12-09 05:49:05.454497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.454527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.454683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.454711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.454812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.454842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.454940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.454968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.455070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.455099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.455194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.455222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.455357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.455387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.455486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.455516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.455668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.455696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.455792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.455821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 
00:54:11.533 [2024-12-09 05:49:05.455942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.456008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.456159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.456191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.456334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.456378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.456468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.456497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.456629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.456658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.456780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.456810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.456898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.456927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.457046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.457098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.457186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.457216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.457354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.457387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 
00:54:11.533 [2024-12-09 05:49:05.457485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.457514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.457682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.457711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.457832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.457860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.457984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.458012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.533 [2024-12-09 05:49:05.458101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.533 [2024-12-09 05:49:05.458132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.533 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.458231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.458278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.458378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.458406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.458495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.458525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.458651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.458697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.458802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.458836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 
00:54:11.534 [2024-12-09 05:49:05.458975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.459007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.459145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.459209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.459327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.459358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.459457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.459487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.459611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.459640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.459772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.459801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.459893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.459922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.460052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.460087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.460225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.460253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.460355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.460383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 
00:54:11.534 [2024-12-09 05:49:05.460474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.460502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.460600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.460630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.460785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.460813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.460903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.460932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.461057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.461085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.461206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.461235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.461353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.461383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.461475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.461503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.461634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.461663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.461811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.461843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 
00:54:11.534 [2024-12-09 05:49:05.461985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.462033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.462157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.462194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.462383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.462413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.462507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.462535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.462652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.462681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.462808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.462837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.462924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.462970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.463124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.463157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.463303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.463331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.463434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.463462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 
00:54:11.534 [2024-12-09 05:49:05.463545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.463600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.463800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.534 [2024-12-09 05:49:05.463847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.534 qpair failed and we were unable to recover it. 00:54:11.534 [2024-12-09 05:49:05.464005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.464039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.464216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.464282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.464390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.464419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.464521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.464550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.464675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.464704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.464890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.464919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.464999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.465028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.465193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.465293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 
00:54:11.535 [2024-12-09 05:49:05.465426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.465454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.465538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.465567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.465713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.465747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.465856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.465900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.466079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.466136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.466239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.466289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.466403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.466432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.466561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.466597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.466724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.466752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.466855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.466888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 
00:54:11.535 [2024-12-09 05:49:05.466978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.467008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.467165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.467194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.467302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.467348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.467441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.467469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.467549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.467604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.467713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.467742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.467923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.467956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.468146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.468190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.468316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.468344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.468432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.468461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 
00:54:11.535 [2024-12-09 05:49:05.468561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.468595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.468723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.468751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.468871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.468901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.469041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.469074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.469208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.469237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.469351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.469393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.469480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.469511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.469624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.469654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.469734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.469764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 00:54:11.535 [2024-12-09 05:49:05.469996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.535 [2024-12-09 05:49:05.470029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.535 qpair failed and we were unable to recover it. 
00:54:11.535 [2024-12-09 05:49:05.470155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:11.536 [2024-12-09 05:49:05.470203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420
00:54:11.536 qpair failed and we were unable to recover it.
00:54:11.536 [entries from 2024-12-09 05:49:05.470375 through 05:49:05.504298 repeat the same three-line sequence for every remaining connection attempt in this window: posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=... with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. The attempts cycle over tqpair handles 0x7faa08000b90, 0x7faa0c000b90, 0x7faa14000b90 and 0xe16fa0, all targeting the same address and port.]
00:54:11.541 [2024-12-09 05:49:05.504445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.541 [2024-12-09 05:49:05.504473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.541 qpair failed and we were unable to recover it. 00:54:11.541 [2024-12-09 05:49:05.504595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.541 [2024-12-09 05:49:05.504624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.541 qpair failed and we were unable to recover it. 00:54:11.541 [2024-12-09 05:49:05.504742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.541 [2024-12-09 05:49:05.504787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.541 qpair failed and we were unable to recover it. 00:54:11.541 [2024-12-09 05:49:05.504930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.541 [2024-12-09 05:49:05.504965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.541 qpair failed and we were unable to recover it. 00:54:11.541 [2024-12-09 05:49:05.505077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.541 [2024-12-09 05:49:05.505123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.541 qpair failed and we were unable to recover it. 00:54:11.541 [2024-12-09 05:49:05.505291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.541 [2024-12-09 05:49:05.505334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.541 qpair failed and we were unable to recover it. 00:54:11.541 [2024-12-09 05:49:05.505431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.541 [2024-12-09 05:49:05.505461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.541 qpair failed and we were unable to recover it. 00:54:11.541 [2024-12-09 05:49:05.505586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.541 [2024-12-09 05:49:05.505614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.541 qpair failed and we were unable to recover it. 00:54:11.541 [2024-12-09 05:49:05.505695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.541 [2024-12-09 05:49:05.505723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.541 qpair failed and we were unable to recover it. 00:54:11.541 [2024-12-09 05:49:05.505827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.541 [2024-12-09 05:49:05.505855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.541 qpair failed and we were unable to recover it. 
00:54:11.541 [2024-12-09 05:49:05.506000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.541 [2024-12-09 05:49:05.506028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.541 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.506131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.506160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.506247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.506284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.506405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.506433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.506527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.506555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.506745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.506794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.506972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.507022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.507173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.507206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.507360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.507390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.507508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.507537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 
00:54:11.542 [2024-12-09 05:49:05.507621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.507649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.507793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.507821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.507924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.507965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.508111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.508143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.508260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.508296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.508424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.508466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.508565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.508595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.508703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.508738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.508862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.508911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.509036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.509064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 
00:54:11.542 [2024-12-09 05:49:05.509188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.509216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.509329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.509373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.509473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.509504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.509596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.509624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.509742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.509772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.509866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.509895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.510050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.510079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.510202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.510230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.510323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.510352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.510508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.510551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 
00:54:11.542 [2024-12-09 05:49:05.510688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.510720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.510889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.510955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.511135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.511164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.511299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.511329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.511448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.511477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.511636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.511669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.511782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.511829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.511954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.512008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.512192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.542 [2024-12-09 05:49:05.512220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.542 qpair failed and we were unable to recover it. 00:54:11.542 [2024-12-09 05:49:05.512346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.512376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 
00:54:11.543 [2024-12-09 05:49:05.512496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.512524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.512730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.512789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.512891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.512924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.513044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.513073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.513235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.513263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.513364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.513392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.513477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.513504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.513652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.513680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.513773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.513817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.513954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.513987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 
00:54:11.543 [2024-12-09 05:49:05.514093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.514121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.514245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.514292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.514462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.514494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.514587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.514615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.514712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.514740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.514912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.514958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.515133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.515165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.515325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.515369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.515488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.515531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.515711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.515749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 
00:54:11.543 [2024-12-09 05:49:05.515891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.515926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.516187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.516253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.516421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.516463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.516591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.516621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.516703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.516732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.516818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.516846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.516970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.517000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.517100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.517142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.517295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.517325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.517424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.517453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 
00:54:11.543 [2024-12-09 05:49:05.517541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.517568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.517653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.517681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.517812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.517858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.517942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.517970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.518071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.518115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.518219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.518252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.543 qpair failed and we were unable to recover it. 00:54:11.543 [2024-12-09 05:49:05.518347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.543 [2024-12-09 05:49:05.518375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.518491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.518519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.518631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.518660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.518742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.518775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 
00:54:11.544 [2024-12-09 05:49:05.518923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.518951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.519085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.519129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.519256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.519294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.519393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.519422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.519510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.519539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.519653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.519681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.519798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.519826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.519975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.520024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.520159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.520203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.520322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.520365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 
00:54:11.544 [2024-12-09 05:49:05.520468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.520500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.520623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.520671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.520839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.520886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.521026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.521075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.521169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.521199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.521343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.521372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.521489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.521517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.521604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.521632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.521722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.521752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.521897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.521926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 
00:54:11.544 [2024-12-09 05:49:05.522048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.522077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.522215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.522259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.522490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.522519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.522735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.522801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.523027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.523077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.523175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.523208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.523369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.523398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.523483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.523511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.523623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.523651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.523770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.523800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 
00:54:11.544 [2024-12-09 05:49:05.523914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.523947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.524105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.524155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.524300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.544 [2024-12-09 05:49:05.524330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.544 qpair failed and we were unable to recover it. 00:54:11.544 [2024-12-09 05:49:05.524433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.524476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.524630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.524660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.524793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.524840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.525011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.525059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.525207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.525234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.525325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.525353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.525444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.525477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 
00:54:11.545 [2024-12-09 05:49:05.525619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.525667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.525841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.525887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.526002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.526030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.526151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.526180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.526301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.526345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.526442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.526473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.526586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.526653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.526972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.527007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.527185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.527219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.527393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.527422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 
00:54:11.545 [2024-12-09 05:49:05.527518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.527546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.527698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.527759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.527900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.527954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.528126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.528176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.528267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.528302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.528417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.528445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.528566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.528594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.528738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.528765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.528880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.528908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.529031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.529059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 
00:54:11.545 [2024-12-09 05:49:05.529148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.529177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.529267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.529301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.529392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.529420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.529503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.529531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.529629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.529657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.529745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.529773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.529933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.529969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.530116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.530145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.530241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.530270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.530425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.530453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 
00:54:11.545 [2024-12-09 05:49:05.530580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.545 [2024-12-09 05:49:05.530608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.545 qpair failed and we were unable to recover it. 00:54:11.545 [2024-12-09 05:49:05.530729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.530756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.530856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.530885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.531049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.531091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.531220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.531251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.531351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.531380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.531483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.531517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.531683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.531716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.531848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.531881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.532092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.532143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 
00:54:11.546 [2024-12-09 05:49:05.532233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.532261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.532367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.532395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.532484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.532511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.532621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.532667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.532807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.532850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.532998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.533028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.533147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.533176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.533269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.533303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.533397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.533427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.533516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.533545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 
00:54:11.546 [2024-12-09 05:49:05.533668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.533697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.533819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.533849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.533928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.533955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.534073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.534106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.534198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.534228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.534358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.534386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.534487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.534515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.534660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.534687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.534804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.534831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.534949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.534977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 
00:54:11.546 [2024-12-09 05:49:05.535130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.535161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.535255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.535312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.535430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.535473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.535596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.535625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.535736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.546 [2024-12-09 05:49:05.535765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.546 qpair failed and we were unable to recover it. 00:54:11.546 [2024-12-09 05:49:05.535885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.535913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.536071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.536118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.536247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.536284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.536406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.536435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.536585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.536613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 
00:54:11.547 [2024-12-09 05:49:05.536703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.536732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.536849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.536878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.537005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.537035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.537175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.537219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.537357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.537388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.537488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.537519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.537664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.537697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.537856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.537892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.538080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.538114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.538260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.538301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 
00:54:11.547 [2024-12-09 05:49:05.538462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.538504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.538627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.538678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.538862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.538912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.539109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.539157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.539304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.539333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.539451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.539478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.539596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.539624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.539741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.539770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.539859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.539887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.540027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.540060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 
00:54:11.547 [2024-12-09 05:49:05.540184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.540215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.540363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.540406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.540529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.540560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.540681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.540735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.540889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.540923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.541106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.541142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.541331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.541361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.541462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.541491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.541639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.541668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.541780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.541851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 
00:54:11.547 [2024-12-09 05:49:05.542173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.542238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.542402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.542431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.547 qpair failed and we were unable to recover it. 00:54:11.547 [2024-12-09 05:49:05.542531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.547 [2024-12-09 05:49:05.542561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.542659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.542709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.542925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.542990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.543294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.543356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.543484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.543512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.543637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.543666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.543819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.543871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.543987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.544086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 
00:54:11.548 [2024-12-09 05:49:05.544203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.544233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.544363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.544391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.544538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.544566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.544663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.544691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.544814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.544842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.544922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.544950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.545093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.545126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.545251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.545290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.545419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.545447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.545537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.545565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 
00:54:11.548 [2024-12-09 05:49:05.545704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.545764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.545879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.545914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.546110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.546145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.546309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.546339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.546458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.546487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.546591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.546619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.546737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.546768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.546910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.546955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.547069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.547120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.547303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.547342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 
00:54:11.548 [2024-12-09 05:49:05.547457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.547486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.547602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.547631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.547753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.547795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.547926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.547964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.548133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.548165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.548329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.548360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.548493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.548522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.548709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.548739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.548904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.548948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 00:54:11.548 [2024-12-09 05:49:05.549085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.548 [2024-12-09 05:49:05.549133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.548 qpair failed and we were unable to recover it. 
00:54:11.548 [2024-12-09 05:49:05.549258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.549296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.549420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.549449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.549537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.549588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.549743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.549776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.549896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.549927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.550045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.550079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.550229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.550258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.550390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.550418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.550507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.550535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.550717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.550745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 
00:54:11.549 [2024-12-09 05:49:05.550858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.550888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.551031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.551083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.551186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.551234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.551383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.551414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.551564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.551594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.551719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.551763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.551855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.551884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.552035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.552065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.552220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.552249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.552361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.552405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 
00:54:11.549 [2024-12-09 05:49:05.552539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.552572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.552666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.552695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.552840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.552885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.553029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.553057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.553174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.553202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.553362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.553393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.553513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.553547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.553655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.553704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.553845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.553892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.554072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.554104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 
00:54:11.549 [2024-12-09 05:49:05.554282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.554330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.554435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.554477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.554645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.554677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.554909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.554965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.555096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.555125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.555247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.555284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.555436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.549 [2024-12-09 05:49:05.555464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.549 qpair failed and we were unable to recover it. 00:54:11.549 [2024-12-09 05:49:05.555668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.555702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.555998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.556065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.556283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.556313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 
00:54:11.550 [2024-12-09 05:49:05.556439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.556468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.556709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.556768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.556957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.556986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.557169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.557197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.557333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.557362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.557517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.557571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.557704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.557737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.557947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.558014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.558240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.558288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.558432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.558466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 
00:54:11.550 [2024-12-09 05:49:05.558634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.558667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.558784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.558818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.559051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.559099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.559210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.559239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.559400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.559428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.559573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.559623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.559724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.559753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.559893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.559940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.560070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.560115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.560218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.560247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 
00:54:11.550 [2024-12-09 05:49:05.560446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.560496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.560631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.560680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.560820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.560849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.561031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.561060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.561183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.561212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.561318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.561369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.561507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.561540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.561685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.561719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.561890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.561969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.562112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.562140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 
00:54:11.550 [2024-12-09 05:49:05.562283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.562312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.562413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.562447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.562550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.562586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.550 qpair failed and we were unable to recover it. 00:54:11.550 [2024-12-09 05:49:05.562755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.550 [2024-12-09 05:49:05.562788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.562932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.562969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.563105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.563137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.563259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.563310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.563415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.563448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.563656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.563689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.563856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.563894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 
00:54:11.551 [2024-12-09 05:49:05.564066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.564109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.564217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.564248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.564431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.564465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.564617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.564666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.564778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.564812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.564981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.565029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.565147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.565177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.565311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.565341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.565435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.565464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.565603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.565638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 
00:54:11.551 [2024-12-09 05:49:05.565769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.565804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.565925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.565973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.566137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.566167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.566287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.566317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.566503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.566537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.566745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.566818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.567056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.567091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.567236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.567279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.567392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.567422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.567598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.567648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 
00:54:11.551 [2024-12-09 05:49:05.567754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.567795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.567968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.568003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.568132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.568160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.568292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.568321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.568402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.568430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.568559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.568592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.568742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.568793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.568921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.568953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.569098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.551 [2024-12-09 05:49:05.569126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.551 qpair failed and we were unable to recover it. 00:54:11.551 [2024-12-09 05:49:05.569209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.569238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 
00:54:11.552 [2024-12-09 05:49:05.569370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.569399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.569581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.569615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.569812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.569876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.570045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.570078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.570238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.570296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.570429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.570459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.570602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.570668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.570850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.570895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.571010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.571038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.571127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.571172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 
00:54:11.552 [2024-12-09 05:49:05.571259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.571295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.571447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.571475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.571663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.571728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.571972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.572027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.572174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.572202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.572323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.572352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.572484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.572535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.572635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.572667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.572831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.572871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.572978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.573022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 
00:54:11.552 [2024-12-09 05:49:05.573148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.573181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.573329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.573374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.573513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.573548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.573677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.573727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.573920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.573955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.574104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.574132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.574220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.574248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.574357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.574388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.574578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.574631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.574814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.574868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 
00:54:11.552 [2024-12-09 05:49:05.574971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.575009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.575178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.575207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.575313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.575342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.575436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.575464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.575619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.575669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.552 [2024-12-09 05:49:05.575771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.552 [2024-12-09 05:49:05.575803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.552 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.575943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.575976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.576123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.576151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.576242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.576280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.576369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.576397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 
00:54:11.553 [2024-12-09 05:49:05.576664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.576713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.576822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.576856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.577036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.577071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.577310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.577342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.577464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.577498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.577678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.577734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.577916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.577970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.578098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.578126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.578209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.578237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.578409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.578443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 
00:54:11.553 [2024-12-09 05:49:05.578627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.578671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.578817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.578866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.579048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.579095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.579185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.579230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.579357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.579400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.579491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.579540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.579666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.579702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.579881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.579917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.580074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.580117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.580230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.580258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 
00:54:11.553 [2024-12-09 05:49:05.580390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.580418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.580502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.580530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.580721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.580782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.580962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.581024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.581186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.581214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.581333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.581362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.581527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.581561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.581805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.581854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.582042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.582074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.582211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.582249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 
00:54:11.553 [2024-12-09 05:49:05.582408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.582444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.582630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.582683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.582825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.553 [2024-12-09 05:49:05.582860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.553 qpair failed and we were unable to recover it. 00:54:11.553 [2024-12-09 05:49:05.582998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.583057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.583168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.583196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.583319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.583348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.583489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.583518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.583675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.583774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.583944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.584036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.584253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.584291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 
00:54:11.554 [2024-12-09 05:49:05.584435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.584469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.584684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.584730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.584823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.584856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.584984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.585018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.585172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.585204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.585332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.585381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.585526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.585576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.585724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.585790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.586028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.586063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.586188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.586218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 
00:54:11.554 [2024-12-09 05:49:05.586337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.586367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.586464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.586494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.586631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.586672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.586983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.587048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.587234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.587269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.587463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.587496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.587688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.587751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.587991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.588029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.588172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.588201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.588390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.588424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 
00:54:11.554 [2024-12-09 05:49:05.588601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.588649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.588801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.588835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.588981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.589017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.589166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.589206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.589351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.589380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.589512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.589540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.589689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.589723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.589891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.589925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.590169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.590204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.554 [2024-12-09 05:49:05.590318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.590352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 
00:54:11.554 [2024-12-09 05:49:05.590477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.554 [2024-12-09 05:49:05.590521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.554 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.590655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.590689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.590827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.590872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.591056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.591116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.591242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.591288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.591482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.591513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.591701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.591734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.591924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.591985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.592093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.592122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.592227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.592258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 
00:54:11.555 [2024-12-09 05:49:05.592424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.592474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.592640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.592677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.592811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.592860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.592998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.593032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.593198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.593229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.593387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.593417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.593553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.593600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.593745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.593806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.594036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.594070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.594207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.594237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 
00:54:11.555 [2024-12-09 05:49:05.594358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.594393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.594540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.594575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.594731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.594779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.594923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.594956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.595107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.595135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.595287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.595315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.595430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.595458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.595544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.595579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.595717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.595750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.595859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.595892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 
00:54:11.555 [2024-12-09 05:49:05.596071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.596141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.596299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.596330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.596502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.596539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.596820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.555 [2024-12-09 05:49:05.596854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.555 qpair failed and we were unable to recover it. 00:54:11.555 [2024-12-09 05:49:05.597142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.597200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.597357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.597386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.597498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.597556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.597698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.597757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.597925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.597978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.598127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.598157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 
00:54:11.556 [2024-12-09 05:49:05.598289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.598318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.598406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.598434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.598573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.598607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.598800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.598838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.598965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.599015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.599174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.599202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.599385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.599420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.599548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.599587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.599731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.599765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.599895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.599929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 
00:54:11.556 [2024-12-09 05:49:05.600057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.600100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.600216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.600246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.600421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.600470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.600597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.600631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.600807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.600859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.601069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.601120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.601237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.601279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.601388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.601453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.601648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.601693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.601951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.602022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 
00:54:11.556 [2024-12-09 05:49:05.602177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.602206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.602326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.602355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.602493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.602541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.602703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.602751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.602895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.602944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.603074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.603103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.603259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.603295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.603440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.603517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.603730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.603800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.604058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.604127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 
00:54:11.556 [2024-12-09 05:49:05.604340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.604371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.604495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.604524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.604629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.604658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.604805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.604833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.604952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.556 [2024-12-09 05:49:05.604982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.556 qpair failed and we were unable to recover it. 00:54:11.556 [2024-12-09 05:49:05.605069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.605097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.605199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.605241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.605349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.605380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.605501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.605531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.605631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.605664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 
00:54:11.557 [2024-12-09 05:49:05.605834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.605897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.606158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.606202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.606362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.606392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.606496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.606527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.606748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.606809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.606927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.606983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.607153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.607186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.607314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.607343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.607469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.607497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.607615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.607660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 
00:54:11.557 [2024-12-09 05:49:05.607799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.607832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.608022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.608055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.608169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.608204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.608358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.608388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.608518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.608551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.608671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.608699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.608846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.608875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.609055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.609106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.609239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.609279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.609395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.609424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 
00:54:11.557 [2024-12-09 05:49:05.609568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.609611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.609718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.609760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.609992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.610043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.610139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.610168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.610330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.610361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.610503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.610552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.610681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.610709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.610829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.610868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.610962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.610990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.611103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.611132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 
00:54:11.557 [2024-12-09 05:49:05.611285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.611314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.611464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.611492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.611615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.611646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.611769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.611798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.611912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.611941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.612075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.557 [2024-12-09 05:49:05.612104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.557 qpair failed and we were unable to recover it. 00:54:11.557 [2024-12-09 05:49:05.612219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.612248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.612401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.612435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.612608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.612698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.612796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.612825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 
00:54:11.558 [2024-12-09 05:49:05.612923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.612952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.613107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.613135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.613254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.613289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.613433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.613479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.613659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.613707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.613850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.613915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.614038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.614068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.614167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.614210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.614359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.614394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.614539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.614572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 
00:54:11.558 [2024-12-09 05:49:05.614762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.614839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.615076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.615122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.615283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.615312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.615426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.615459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.615604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.615643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.615807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.615851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.615968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.616002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.616161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.616228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.616380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.616412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.616510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.616539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 
00:54:11.558 [2024-12-09 05:49:05.616686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.616741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.616877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.616943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.617061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.617121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.617303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.617333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.617421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.617451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.617568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.617596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.617695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.617723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.617840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.617868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.617972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.618005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.618103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.618136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 
00:54:11.558 [2024-12-09 05:49:05.618233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.618266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.618388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.618416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.618539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.618570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.618688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.618722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.618828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.618861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.619027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.619060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.558 [2024-12-09 05:49:05.619192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.558 [2024-12-09 05:49:05.619220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.558 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.619340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.619374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.619495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.619524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.619706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.619741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 
00:54:11.559 [2024-12-09 05:49:05.620030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.620064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.620200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.620235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.620415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.620444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.620562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.620598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.620675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.620703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.620881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.620929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.621046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.621095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.621247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.621283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.621405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.621435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.621527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.621557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 
00:54:11.559 [2024-12-09 05:49:05.621722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.621751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.621833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.621862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.622070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.622137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.622310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.622358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.622452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.622486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.622634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.622663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.622784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.622811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.622929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.622957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.623135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.623169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.623296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.623345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 
00:54:11.559 [2024-12-09 05:49:05.623436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.623465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.623600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.623643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.623836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.623890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.624115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.624166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.624291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.624321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.624492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.624538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.624682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.624730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.624874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.624930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.625070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.625100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.625247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.625300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 
00:54:11.559 [2024-12-09 05:49:05.625427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.625456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.625632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.625681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.625857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.625891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.626058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.626119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.626234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.626263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.559 [2024-12-09 05:49:05.626396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.559 [2024-12-09 05:49:05.626426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.559 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.626575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.626643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.626855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.626884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.627005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.627034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.627153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.627181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 
00:54:11.560 [2024-12-09 05:49:05.627318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.627362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.627503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.627534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.627686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.627723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.627841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.627870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.627989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.628017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.628113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.628144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.628270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.628305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.628388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.628416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.628537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.628564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.628767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.628826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 
00:54:11.560 [2024-12-09 05:49:05.628926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.628958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.629050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.629083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.629232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.629280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.629433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.629461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.629596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.629637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.629781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.629810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.630003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.630077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.630283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.630314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.630466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.630495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.630641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.630699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 
00:54:11.560 [2024-12-09 05:49:05.630879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.630966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.631172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.631206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.631363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.631392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.631508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.631536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.631721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.631797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.632054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.632129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.632322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.632353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.632480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.632509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.632666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.632699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.632877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.632911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 
00:54:11.560 [2024-12-09 05:49:05.633114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.633173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.633347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.633376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.633454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.633482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.633582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.633611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.633766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.633824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.633924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.633952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.634105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.634138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.634288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.634317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.634429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.634457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 00:54:11.560 [2024-12-09 05:49:05.634556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.560 [2024-12-09 05:49:05.634584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.560 qpair failed and we were unable to recover it. 
00:54:11.560 [2024-12-09 05:49:05.634717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.634784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.634990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.635023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.635202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.635239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.635386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.635416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.635566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.635596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.635687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.635715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.635855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.635920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.636211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.636296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.636414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.636443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.636639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.636705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 
00:54:11.561 [2024-12-09 05:49:05.636961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.637026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.637243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.637289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.637434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.637462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.637588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.637616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.637704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.637777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.637994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.638073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.638260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.638299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.638416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.638444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.638565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.638594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.638696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.638728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 
00:54:11.561 [2024-12-09 05:49:05.638934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.638999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.639209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.639238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.639415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.639458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.639573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.639604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.639731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.639777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.639919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.639953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.640163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.640195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.640344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.640373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.640494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.640522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.640685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.640715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 
00:54:11.561 [2024-12-09 05:49:05.640804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.640833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.641014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.641049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.641188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.641218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.641341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.641371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.641483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.641512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.641663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.641692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.641843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.641908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.642223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.642309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.642461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.642490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.642636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.642665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 
00:54:11.561 [2024-12-09 05:49:05.642811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.642839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.643050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.643115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.643353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.643382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.643500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.643529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.561 [2024-12-09 05:49:05.643668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.561 [2024-12-09 05:49:05.643698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.561 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.643821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.643870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.644127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.644181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.644395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.644423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.644506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.644551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.644773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.644837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 
00:54:11.562 [2024-12-09 05:49:05.645118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.645182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.645394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.645425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.645593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.645638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.645798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.645832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.645970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.646011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.646203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.646300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.646436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.646464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.646576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.646604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.646759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.646793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.646931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.646979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 
00:54:11.562 [2024-12-09 05:49:05.647121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.647165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.647286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.647315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.647464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.647492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.647651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.647683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.647911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.647971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.648067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.648100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.648239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.648287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.648406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.648434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.648533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.648571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.648689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.648717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 
00:54:11.562 [2024-12-09 05:49:05.648917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.648984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.649143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.649214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.649449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.649477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.649708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.649773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.650044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.650108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.650363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.650392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.650540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.650620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.650812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.650840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.650940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.650968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.651212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.651307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 
00:54:11.562 [2024-12-09 05:49:05.651455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.651483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.651577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.651605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.651775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.651841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.652126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.652191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.652411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.652441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.652539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.652590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.652748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.652781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.652906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.652940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.562 qpair failed and we were unable to recover it. 00:54:11.562 [2024-12-09 05:49:05.653185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.562 [2024-12-09 05:49:05.653242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.653387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.653430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 
00:54:11.563 [2024-12-09 05:49:05.653593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.653622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.653847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.653903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.653984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.654013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.654112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.654140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.654229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.654285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.654412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.654440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.654565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.654593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.654705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.654740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.654974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.655040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.655340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.655369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 
00:54:11.563 [2024-12-09 05:49:05.655463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.655492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.655653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.655701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.655839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.655872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.656033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.656066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.656214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.656262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.656406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.656437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.656532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.656569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.656742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.656777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.656921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.656955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.657141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.657173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 
00:54:11.563 [2024-12-09 05:49:05.657334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.657363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.657460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.657490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.657613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.657641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.657786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.657818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.657950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.657984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.658161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.658208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.658304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.658333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.658425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.658453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.658540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.658577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.658685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.658718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 
00:54:11.563 [2024-12-09 05:49:05.658853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.658886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.659057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.659119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.659262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.659311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.659407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.659437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.659580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.659616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.659771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.659804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.659931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.659965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.563 [2024-12-09 05:49:05.660103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.563 [2024-12-09 05:49:05.660136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.563 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.660285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.660314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.660405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.660434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 
00:54:11.564 [2024-12-09 05:49:05.660593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.660621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.660762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.660796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.660945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.660992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.661127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.661163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.661302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.661336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.661434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.661462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.661577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.661623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.661789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.661821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.661954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.661987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.662154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.662190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 
00:54:11.564 [2024-12-09 05:49:05.662355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.662398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.662550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.662599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.662767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.662814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.662955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.663000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.663082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.663110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.663237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.663281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.663387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.663448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.663585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.663632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.663741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.663770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.663872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.663920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 
00:54:11.564 [2024-12-09 05:49:05.664041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.664068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.664188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.664215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.664357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.664385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.664503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.664531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.664618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.664645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.664787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.664814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.664942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.664970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.665062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.665091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.665176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.665204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.665323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.665351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 
00:54:11.564 [2024-12-09 05:49:05.665429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.665457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.665594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.665636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.665763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.665793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.665917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.665949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.666037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.666065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.666184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.666212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.666330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.666362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.666477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.666509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.666640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.666671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.666802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.666833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 
00:54:11.564 [2024-12-09 05:49:05.666994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.667042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.667155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.667183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.667332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.564 [2024-12-09 05:49:05.667367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.564 qpair failed and we were unable to recover it. 00:54:11.564 [2024-12-09 05:49:05.667474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.667507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.667638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.667676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.667790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.667819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.667939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.667971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.668105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.668137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.668315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.668344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.668436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.668464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 
00:54:11.565 [2024-12-09 05:49:05.668590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.668618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.668743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.668773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.668869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.668901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.669056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.669087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.669251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.669296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.669388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.669417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.669535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.669571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.669657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.669685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.669803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.669849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.669961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.670006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 
00:54:11.565 [2024-12-09 05:49:05.670124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.670152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.670239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.670287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.670406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.670448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.670553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.670584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.670704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.670732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.670860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.670888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.671003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.671031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.671149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.671176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.671300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.671329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.671423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.671451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 
00:54:11.565 [2024-12-09 05:49:05.671560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.671588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.671681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.671715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.671867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.671896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.672003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.672033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.672204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.672247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.672369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.672412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.672537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.672574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.672704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.672735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.672890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.672919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.673073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.673103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 
00:54:11.565 [2024-12-09 05:49:05.673235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.673283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.673375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.673404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.673523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.673553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.673699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.673728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.673855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.673884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.674006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.674035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.674150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.674178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.674257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.674297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.674437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.674464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 00:54:11.565 [2024-12-09 05:49:05.674553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.565 [2024-12-09 05:49:05.674584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.565 qpair failed and we were unable to recover it. 
00:54:11.565 [2024-12-09 05:49:05.674702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.674730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.674871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.674898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.675045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.675073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.675195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.675227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.675342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.675384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.675516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.675546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.675668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.675695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.675834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.675861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.675955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.675983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.676102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.676130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 
00:54:11.566 [2024-12-09 05:49:05.676255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.676299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.676392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.676420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.676515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.676542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.676633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.676659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.676749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.676776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.676855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.676882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.676993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.677020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.677143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.677170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.677247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.677282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.677381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.677409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 
00:54:11.566 [2024-12-09 05:49:05.677491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.677518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.677614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.677646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.677760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.677788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.677905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.677932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.678048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.678075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.678189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.678217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.678347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.678375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.678486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.678513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.678666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.678694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.678806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.678833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 
00:54:11.566 [2024-12-09 05:49:05.678931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.678961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.679097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.679138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.679262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.679298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.679460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.679487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.679579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.679606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.679707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.679748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.679868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.679897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.680019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.680047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.680161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.680188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.680292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.680320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 
00:54:11.566 [2024-12-09 05:49:05.680435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.680461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.680539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.680576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.680697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.680724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.680835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.680863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.680946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.680973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.681059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.681087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.681176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.681203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.681315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.681343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.681440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.681480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 00:54:11.566 [2024-12-09 05:49:05.681577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.566 [2024-12-09 05:49:05.681606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.566 qpair failed and we were unable to recover it. 
00:54:11.566 [2024-12-09 05:49:05.681719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.681745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.681830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.681856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.681965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.681990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.682099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.682125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.682211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.682237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.682357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.682386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.682504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.682531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.682619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.682645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.682744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.682770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.682853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.682879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 
00:54:11.567 [2024-12-09 05:49:05.682985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.683011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.683125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.683153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.683295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.683334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.683485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.683513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.683606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.683634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.683782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.683808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.683897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.683924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.684034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.684061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.684171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.684197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.684306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.684347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 
00:54:11.567 [2024-12-09 05:49:05.684496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.684524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.684613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.684639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.684750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.684778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.684878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.684906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.685048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.685087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.685189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.685217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.685337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.685364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.685449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.685476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.685589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.685615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.685724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.685750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 
00:54:11.567 [2024-12-09 05:49:05.685832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.685858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.685963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.685989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.686075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.686104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.686196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.686224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.686327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.686367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.686468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.686496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.686615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.686641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.686723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.686749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.686834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.686867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.686984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.687010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 
00:54:11.567 [2024-12-09 05:49:05.687135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.687176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.687325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.687354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.687442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.687469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.687554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.687581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.687687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.687713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.687829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.687858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.567 [2024-12-09 05:49:05.687976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.567 [2024-12-09 05:49:05.688003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.567 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.688120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.688149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.688257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.688295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.688409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.688435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 
00:54:11.568 [2024-12-09 05:49:05.688519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.688545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.688684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.688712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.688803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.688830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.688918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.688946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.689033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.689060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.689173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.689201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.689290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.689318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.689430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.689457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.689597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.689624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.689744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.689770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 
00:54:11.568 [2024-12-09 05:49:05.689880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.689907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.690004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.690032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.690148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.690176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.690264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.690300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.690387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.690414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.690525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.690552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.690652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.690691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.690790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.690817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.690900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.690926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.691017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.691044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 
00:54:11.568 [2024-12-09 05:49:05.691158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.691187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.691265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.691301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.691410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.691436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.691521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.691549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.691666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.691693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.691807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.691835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.691951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.691977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.692060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.692089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.692178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.692206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.692326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.692355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 
00:54:11.568 [2024-12-09 05:49:05.692450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.692476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.692578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.692604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.692694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.692721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.692833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.692860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.692950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.692976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.693083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.693109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.693197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.693226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.693325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.693354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.693472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.693500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.693608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.693634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 
00:54:11.568 [2024-12-09 05:49:05.693742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.693768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.693848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.693875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.693990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.694017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.694108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.694134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.694243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.694268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.694361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.694386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.694494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.568 [2024-12-09 05:49:05.694520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.568 qpair failed and we were unable to recover it. 00:54:11.568 [2024-12-09 05:49:05.694636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.694662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.694747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.694772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.694888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.694914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 
00:54:11.569 [2024-12-09 05:49:05.695052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.695078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.695165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.695210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.695341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.695368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.695451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.695478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.695613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.695666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.695838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.695893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.696015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.696057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.696161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.696189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.696308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.696353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.696435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.696461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 
00:54:11.569 [2024-12-09 05:49:05.696543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.696585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.696704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.696733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.696848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.696876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.696995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.697026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.697128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.697171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.697410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.697440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.697556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.697583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.697746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.697775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.697886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.697915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.698172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.698198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 
00:54:11.569 [2024-12-09 05:49:05.698282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.698310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.698445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.698471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.698583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.698629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.698881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.698947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.699261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.699337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.699455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.699483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.699628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.699675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.699808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.699904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.700087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.700116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.700229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.700291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 
00:54:11.569 [2024-12-09 05:49:05.700395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.700435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.700562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.700590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.700730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.700762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.700905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.700939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.701118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.701175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.701373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.701416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.701534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.701560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.701736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.701769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.701933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.701967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.702106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.702139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 
00:54:11.569 [2024-12-09 05:49:05.702370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.702398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.702537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.702564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.702703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.702749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.702990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.703057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.703284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.703311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.703399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.703425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.569 [2024-12-09 05:49:05.703542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.569 [2024-12-09 05:49:05.703569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.569 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.703698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.703727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.703896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.703961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.704172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.704228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 
00:54:11.570 [2024-12-09 05:49:05.704379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.704407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.704527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.704573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.704795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.704860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.705145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.705211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.705455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.705483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.705614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.705648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.705764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.705794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.705940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.705986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.706212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.706247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.706381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.706408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 
00:54:11.570 [2024-12-09 05:49:05.706499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.706525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.706632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.706679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.706842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.706878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.707024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.707059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.707172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.707211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.707401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.707443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.707585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.707615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.707811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.707838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.707949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.707976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.708085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.708112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 
00:54:11.570 [2024-12-09 05:49:05.708229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.708256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.708409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.708450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.708579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.708633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.708748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.708776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.708889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.708916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.709023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.709049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.709141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.709168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.709254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.709290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.709423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.709452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.709595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.709646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 
00:54:11.570 [2024-12-09 05:49:05.709871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.709904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.710088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.710147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.710270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.710305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.710394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.710422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.710556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.710608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.710698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.710725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.710912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.710962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.711108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.711136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.711229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.711260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.711455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.711491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 
00:54:11.570 [2024-12-09 05:49:05.711639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.711674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.711787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.570 [2024-12-09 05:49:05.711837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.570 qpair failed and we were unable to recover it. 00:54:11.570 [2024-12-09 05:49:05.712111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.712144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.712284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.712338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.712455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.712484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.712657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.712685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.712798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.712825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.712918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.712945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.713064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.713092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.713173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.713205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 
00:54:11.571 [2024-12-09 05:49:05.713307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.713333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.713453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.713479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.713595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.713622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.713764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.713791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.713884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.713911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.714004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.714031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.714140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.714167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.714288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.714324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.714418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.714445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.714564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.714591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 
00:54:11.571 [2024-12-09 05:49:05.714713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.714740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.714854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.714881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.714965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.714992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.715139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.715167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.715285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.715323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.715439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.715465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.715554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.715587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.715695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.715722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.715839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.715867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.715952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.715979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 
00:54:11.571 [2024-12-09 05:49:05.716057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.716084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.716211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.716254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.716414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.716457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.716556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.716591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.716677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.716706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.716823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.716852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.571 qpair failed and we were unable to recover it. 00:54:11.571 [2024-12-09 05:49:05.716971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.571 [2024-12-09 05:49:05.717006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.717153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.717181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.717311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.717356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.717492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.717523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 
00:54:11.572 [2024-12-09 05:49:05.717624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.717655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.717777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.717806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.717967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.718010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.718115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.718159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.718287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.718325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.718465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.718510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.718667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.718742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.718871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.718926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.719012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.719040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.719176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.719219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 
00:54:11.572 [2024-12-09 05:49:05.719416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.719450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.719574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.719640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.719808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.719872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.720160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.720223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.720473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.720517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.720697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.720733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.720951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.721016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.721222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.721257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.721450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.721491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.721644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.721673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 
00:54:11.572 [2024-12-09 05:49:05.721937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.721988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.722135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.722164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.722284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.722332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.722422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.722451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.722573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.722601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.722714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.722742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.722852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.722880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.722974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.723001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.723142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.723185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.723379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.723411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 
00:54:11.572 [2024-12-09 05:49:05.723524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.723594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.723758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.723806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.723913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.723942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.572 qpair failed and we were unable to recover it. 00:54:11.572 [2024-12-09 05:49:05.724110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.572 [2024-12-09 05:49:05.724145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.573 qpair failed and we were unable to recover it. 00:54:11.573 [2024-12-09 05:49:05.724293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.573 [2024-12-09 05:49:05.724348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.573 qpair failed and we were unable to recover it. 00:54:11.573 [2024-12-09 05:49:05.724435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.573 [2024-12-09 05:49:05.724463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.573 qpair failed and we were unable to recover it. 00:54:11.573 [2024-12-09 05:49:05.724563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.573 [2024-12-09 05:49:05.724608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.573 qpair failed and we were unable to recover it. 00:54:11.573 [2024-12-09 05:49:05.724740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.573 [2024-12-09 05:49:05.724769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.573 qpair failed and we were unable to recover it. 00:54:11.573 [2024-12-09 05:49:05.724868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.573 [2024-12-09 05:49:05.724900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.573 qpair failed and we were unable to recover it. 00:54:11.573 [2024-12-09 05:49:05.725096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.573 [2024-12-09 05:49:05.725128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.573 qpair failed and we were unable to recover it. 
00:54:11.573 [2024-12-09 05:49:05.725363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.573 [2024-12-09 05:49:05.725392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.573 qpair failed and we were unable to recover it. 00:54:11.573 [2024-12-09 05:49:05.725503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.573 [2024-12-09 05:49:05.725545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.573 qpair failed and we were unable to recover it. 00:54:11.573 [2024-12-09 05:49:05.725653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.573 [2024-12-09 05:49:05.725709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.573 qpair failed and we were unable to recover it. 00:54:11.573 [2024-12-09 05:49:05.725838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.856 [2024-12-09 05:49:05.725868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.856 qpair failed and we were unable to recover it. 00:54:11.856 [2024-12-09 05:49:05.725980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.856 [2024-12-09 05:49:05.726010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.856 qpair failed and we were unable to recover it. 00:54:11.856 [2024-12-09 05:49:05.726106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.856 [2024-12-09 05:49:05.726150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.856 qpair failed and we were unable to recover it. 00:54:11.856 [2024-12-09 05:49:05.726267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.856 [2024-12-09 05:49:05.726305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.856 qpair failed and we were unable to recover it. 00:54:11.856 [2024-12-09 05:49:05.726424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.856 [2024-12-09 05:49:05.726454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.856 qpair failed and we were unable to recover it. 00:54:11.856 [2024-12-09 05:49:05.726551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.856 [2024-12-09 05:49:05.726590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.856 qpair failed and we were unable to recover it. 00:54:11.856 [2024-12-09 05:49:05.726737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.856 [2024-12-09 05:49:05.726766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.856 qpair failed and we were unable to recover it. 
00:54:11.856 [2024-12-09 05:49:05.726917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.856 [2024-12-09 05:49:05.726951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.856 qpair failed and we were unable to recover it. 00:54:11.856 [2024-12-09 05:49:05.727088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.856 [2024-12-09 05:49:05.727120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.856 qpair failed and we were unable to recover it. 00:54:11.856 [2024-12-09 05:49:05.727230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.856 [2024-12-09 05:49:05.727262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.856 qpair failed and we were unable to recover it. 00:54:11.856 [2024-12-09 05:49:05.727385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.856 [2024-12-09 05:49:05.727413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.856 qpair failed and we were unable to recover it. 00:54:11.856 [2024-12-09 05:49:05.727505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.857 [2024-12-09 05:49:05.727533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.857 qpair failed and we were unable to recover it. 00:54:11.857 [2024-12-09 05:49:05.727666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.857 [2024-12-09 05:49:05.727712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.857 qpair failed and we were unable to recover it. 00:54:11.857 [2024-12-09 05:49:05.727835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.857 [2024-12-09 05:49:05.727869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.857 qpair failed and we were unable to recover it. 00:54:11.857 [2024-12-09 05:49:05.727990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.857 [2024-12-09 05:49:05.728017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.857 qpair failed and we were unable to recover it. 00:54:11.857 [2024-12-09 05:49:05.728095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.857 [2024-12-09 05:49:05.728121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.857 qpair failed and we were unable to recover it. 00:54:11.857 [2024-12-09 05:49:05.728205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.857 [2024-12-09 05:49:05.728232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.857 qpair failed and we were unable to recover it. 
00:54:11.857 [2024-12-09 05:49:05.728356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.857 [2024-12-09 05:49:05.728383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.857 qpair failed and we were unable to recover it. 00:54:11.857 [2024-12-09 05:49:05.728470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.857 [2024-12-09 05:49:05.728497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.857 qpair failed and we were unable to recover it. 00:54:11.857 [2024-12-09 05:49:05.728590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.857 [2024-12-09 05:49:05.728618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.857 qpair failed and we were unable to recover it. 00:54:11.857 [2024-12-09 05:49:05.728738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.857 [2024-12-09 05:49:05.728766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.857 qpair failed and we were unable to recover it. 00:54:11.857 [2024-12-09 05:49:05.728853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.857 [2024-12-09 05:49:05.728880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.857 qpair failed and we were unable to recover it. 00:54:11.857 [2024-12-09 05:49:05.728995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.857 [2024-12-09 05:49:05.729022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.857 qpair failed and we were unable to recover it. 00:54:11.857 [2024-12-09 05:49:05.729112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.857 [2024-12-09 05:49:05.729139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.857 qpair failed and we were unable to recover it. 00:54:11.857 [2024-12-09 05:49:05.729246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.857 [2024-12-09 05:49:05.729278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.857 qpair failed and we were unable to recover it. 00:54:11.857 [2024-12-09 05:49:05.729360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.857 [2024-12-09 05:49:05.729386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.857 qpair failed and we were unable to recover it. 00:54:11.857 [2024-12-09 05:49:05.729493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.857 [2024-12-09 05:49:05.729534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.857 qpair failed and we were unable to recover it. 
00:54:11.857 [2024-12-09 05:49:05.729679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.857 [2024-12-09 05:49:05.729724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.857 qpair failed and we were unable to recover it. 00:54:11.857 [2024-12-09 05:49:05.729822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.857 [2024-12-09 05:49:05.729855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.857 qpair failed and we were unable to recover it. 00:54:11.857 [2024-12-09 05:49:05.729976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.857 [2024-12-09 05:49:05.730026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.857 qpair failed and we were unable to recover it. 00:54:11.857 [2024-12-09 05:49:05.730159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.857 [2024-12-09 05:49:05.730192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.857 qpair failed and we were unable to recover it. 00:54:11.857 [2024-12-09 05:49:05.730334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.857 [2024-12-09 05:49:05.730363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.857 qpair failed and we were unable to recover it. 00:54:11.857 [2024-12-09 05:49:05.730457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.857 [2024-12-09 05:49:05.730486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.857 qpair failed and we were unable to recover it. 00:54:11.857 [2024-12-09 05:49:05.730618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.857 [2024-12-09 05:49:05.730646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.857 qpair failed and we were unable to recover it. 00:54:11.857 [2024-12-09 05:49:05.730746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.857 [2024-12-09 05:49:05.730776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.857 qpair failed and we were unable to recover it. 00:54:11.857 [2024-12-09 05:49:05.730918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.857 [2024-12-09 05:49:05.730950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.857 qpair failed and we were unable to recover it. 00:54:11.857 [2024-12-09 05:49:05.731055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.857 [2024-12-09 05:49:05.731100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.857 qpair failed and we were unable to recover it. 
00:54:11.858 [2024-12-09 05:49:05.731205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.858 [2024-12-09 05:49:05.731236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.858 qpair failed and we were unable to recover it. 00:54:11.858 [2024-12-09 05:49:05.731396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.858 [2024-12-09 05:49:05.731424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.858 qpair failed and we were unable to recover it. 00:54:11.858 [2024-12-09 05:49:05.731522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.858 [2024-12-09 05:49:05.731550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.858 qpair failed and we were unable to recover it. 00:54:11.858 [2024-12-09 05:49:05.731637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.858 [2024-12-09 05:49:05.731665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.858 qpair failed and we were unable to recover it. 00:54:11.858 [2024-12-09 05:49:05.731761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.858 [2024-12-09 05:49:05.731806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.858 qpair failed and we were unable to recover it. 00:54:11.858 [2024-12-09 05:49:05.731919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.858 [2024-12-09 05:49:05.731948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.858 qpair failed and we were unable to recover it. 00:54:11.858 [2024-12-09 05:49:05.732062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.858 [2024-12-09 05:49:05.732106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.858 qpair failed and we were unable to recover it. 00:54:11.858 [2024-12-09 05:49:05.732250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.858 [2024-12-09 05:49:05.732284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.858 qpair failed and we were unable to recover it. 00:54:11.858 [2024-12-09 05:49:05.732434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.858 [2024-12-09 05:49:05.732461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.858 qpair failed and we were unable to recover it. 00:54:11.858 [2024-12-09 05:49:05.732588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.858 [2024-12-09 05:49:05.732614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.858 qpair failed and we were unable to recover it. 
00:54:11.858 [2024-12-09 05:49:05.732723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:11.858 [2024-12-09 05:49:05.732750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420
00:54:11.858 qpair failed and we were unable to recover it.
00:54:11.858 [2024-12-09 05:49:05.734281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:11.858 [2024-12-09 05:49:05.734322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420
00:54:11.858 qpair failed and we were unable to recover it.
00:54:11.859 [2024-12-09 05:49:05.738098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:11.859 [2024-12-09 05:49:05.738137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420
00:54:11.859 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111, ECONNREFUSED on Linux) against 10.0.0.2:4420 repeats continuously through 05:49:05.764 for tqpairs 0xe16fa0, 0x7faa0c000b90 and 0x7faa14000b90; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:54:11.866 [2024-12-09 05:49:05.764863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.866 [2024-12-09 05:49:05.764889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.866 qpair failed and we were unable to recover it. 00:54:11.866 [2024-12-09 05:49:05.764978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.866 [2024-12-09 05:49:05.765005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.866 qpair failed and we were unable to recover it. 00:54:11.866 [2024-12-09 05:49:05.765094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.866 [2024-12-09 05:49:05.765120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.866 qpair failed and we were unable to recover it. 00:54:11.866 [2024-12-09 05:49:05.765237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.866 [2024-12-09 05:49:05.765264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.866 qpair failed and we were unable to recover it. 00:54:11.866 [2024-12-09 05:49:05.765368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.866 [2024-12-09 05:49:05.765394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.866 qpair failed and we were unable to recover it. 00:54:11.866 [2024-12-09 05:49:05.765487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.866 [2024-12-09 05:49:05.765514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.866 qpair failed and we were unable to recover it. 00:54:11.866 [2024-12-09 05:49:05.765653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.866 [2024-12-09 05:49:05.765685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.866 qpair failed and we were unable to recover it. 00:54:11.866 [2024-12-09 05:49:05.765799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.867 [2024-12-09 05:49:05.765825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.867 qpair failed and we were unable to recover it. 00:54:11.867 [2024-12-09 05:49:05.765914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.867 [2024-12-09 05:49:05.765942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.867 qpair failed and we were unable to recover it. 00:54:11.867 [2024-12-09 05:49:05.766076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.867 [2024-12-09 05:49:05.766108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.867 qpair failed and we were unable to recover it. 
00:54:11.867 [2024-12-09 05:49:05.766226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.867 [2024-12-09 05:49:05.766255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.867 qpair failed and we were unable to recover it. 00:54:11.867 [2024-12-09 05:49:05.766383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.867 [2024-12-09 05:49:05.766411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.867 qpair failed and we were unable to recover it. 00:54:11.867 [2024-12-09 05:49:05.766525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.867 [2024-12-09 05:49:05.766555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.867 qpair failed and we were unable to recover it. 00:54:11.867 [2024-12-09 05:49:05.766712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.867 [2024-12-09 05:49:05.766758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.867 qpair failed and we were unable to recover it. 00:54:11.867 [2024-12-09 05:49:05.766877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.867 [2024-12-09 05:49:05.766905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.867 qpair failed and we were unable to recover it. 00:54:11.867 [2024-12-09 05:49:05.767060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.867 [2024-12-09 05:49:05.767088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.867 qpair failed and we were unable to recover it. 00:54:11.867 [2024-12-09 05:49:05.767191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.867 [2024-12-09 05:49:05.767220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.867 qpair failed and we were unable to recover it. 00:54:11.867 [2024-12-09 05:49:05.767375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.867 [2024-12-09 05:49:05.767405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.867 qpair failed and we were unable to recover it. 00:54:11.867 [2024-12-09 05:49:05.767500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.867 [2024-12-09 05:49:05.767527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.867 qpair failed and we were unable to recover it. 00:54:11.867 [2024-12-09 05:49:05.767622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.867 [2024-12-09 05:49:05.767650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.867 qpair failed and we were unable to recover it. 
00:54:11.867 [2024-12-09 05:49:05.767772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.867 [2024-12-09 05:49:05.767799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.867 qpair failed and we were unable to recover it. 00:54:11.867 [2024-12-09 05:49:05.767940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.867 [2024-12-09 05:49:05.767967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.867 qpair failed and we were unable to recover it. 00:54:11.867 [2024-12-09 05:49:05.768093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.867 [2024-12-09 05:49:05.768121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.867 qpair failed and we were unable to recover it. 00:54:11.867 [2024-12-09 05:49:05.768241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.867 [2024-12-09 05:49:05.768267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.867 qpair failed and we were unable to recover it. 00:54:11.867 [2024-12-09 05:49:05.768394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.867 [2024-12-09 05:49:05.768421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.867 qpair failed and we were unable to recover it. 00:54:11.867 [2024-12-09 05:49:05.768561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.867 [2024-12-09 05:49:05.768602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.867 qpair failed and we were unable to recover it. 00:54:11.867 [2024-12-09 05:49:05.768785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.867 [2024-12-09 05:49:05.768814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.867 qpair failed and we were unable to recover it. 00:54:11.867 [2024-12-09 05:49:05.768907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.867 [2024-12-09 05:49:05.768938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.867 qpair failed and we were unable to recover it. 00:54:11.867 [2024-12-09 05:49:05.769065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.867 [2024-12-09 05:49:05.769094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.867 qpair failed and we were unable to recover it. 00:54:11.867 [2024-12-09 05:49:05.769216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.867 [2024-12-09 05:49:05.769258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.867 qpair failed and we were unable to recover it. 
00:54:11.867 [2024-12-09 05:49:05.769400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.867 [2024-12-09 05:49:05.769431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.867 qpair failed and we were unable to recover it. 00:54:11.867 [2024-12-09 05:49:05.769590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.867 [2024-12-09 05:49:05.769637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.867 qpair failed and we were unable to recover it. 00:54:11.867 [2024-12-09 05:49:05.769804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.867 [2024-12-09 05:49:05.769851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.868 qpair failed and we were unable to recover it. 00:54:11.868 [2024-12-09 05:49:05.769943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.868 [2024-12-09 05:49:05.769977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.868 qpair failed and we were unable to recover it. 00:54:11.868 [2024-12-09 05:49:05.770139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.868 [2024-12-09 05:49:05.770168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.868 qpair failed and we were unable to recover it. 00:54:11.868 [2024-12-09 05:49:05.770295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.868 [2024-12-09 05:49:05.770324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.868 qpair failed and we were unable to recover it. 00:54:11.868 [2024-12-09 05:49:05.770456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.868 [2024-12-09 05:49:05.770501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.868 qpair failed and we were unable to recover it. 00:54:11.868 [2024-12-09 05:49:05.770678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.868 [2024-12-09 05:49:05.770708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.868 qpair failed and we were unable to recover it. 00:54:11.868 [2024-12-09 05:49:05.770878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.868 [2024-12-09 05:49:05.770908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.868 qpair failed and we were unable to recover it. 00:54:11.868 [2024-12-09 05:49:05.771043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.868 [2024-12-09 05:49:05.771072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.868 qpair failed and we were unable to recover it. 
00:54:11.868 [2024-12-09 05:49:05.771175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.868 [2024-12-09 05:49:05.771204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.868 qpair failed and we were unable to recover it. 00:54:11.868 [2024-12-09 05:49:05.771297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.868 [2024-12-09 05:49:05.771325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.868 qpair failed and we were unable to recover it. 00:54:11.868 [2024-12-09 05:49:05.771464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.868 [2024-12-09 05:49:05.771509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.868 qpair failed and we were unable to recover it. 00:54:11.868 [2024-12-09 05:49:05.771639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.868 [2024-12-09 05:49:05.771684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.868 qpair failed and we were unable to recover it. 00:54:11.868 [2024-12-09 05:49:05.771827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.868 [2024-12-09 05:49:05.771855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.868 qpair failed and we were unable to recover it. 00:54:11.868 [2024-12-09 05:49:05.771941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.868 [2024-12-09 05:49:05.771967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.868 qpair failed and we were unable to recover it. 00:54:11.868 [2024-12-09 05:49:05.772086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.868 [2024-12-09 05:49:05.772113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.868 qpair failed and we were unable to recover it. 00:54:11.868 [2024-12-09 05:49:05.772210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.868 [2024-12-09 05:49:05.772237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.868 qpair failed and we were unable to recover it. 00:54:11.868 [2024-12-09 05:49:05.772355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.868 [2024-12-09 05:49:05.772396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.868 qpair failed and we were unable to recover it. 00:54:11.868 [2024-12-09 05:49:05.772509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.868 [2024-12-09 05:49:05.772539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.868 qpair failed and we were unable to recover it. 
00:54:11.868 [2024-12-09 05:49:05.772727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.868 [2024-12-09 05:49:05.772758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.868 qpair failed and we were unable to recover it. 00:54:11.868 [2024-12-09 05:49:05.772862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.868 [2024-12-09 05:49:05.772892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.868 qpair failed and we were unable to recover it. 00:54:11.868 [2024-12-09 05:49:05.773002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.868 [2024-12-09 05:49:05.773046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.868 qpair failed and we were unable to recover it. 00:54:11.868 [2024-12-09 05:49:05.773154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.868 [2024-12-09 05:49:05.773180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.868 qpair failed and we were unable to recover it. 00:54:11.868 [2024-12-09 05:49:05.773310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.868 [2024-12-09 05:49:05.773339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.868 qpair failed and we were unable to recover it. 00:54:11.868 [2024-12-09 05:49:05.773465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.868 [2024-12-09 05:49:05.773494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.868 qpair failed and we were unable to recover it. 00:54:11.869 [2024-12-09 05:49:05.773657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.869 [2024-12-09 05:49:05.773700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.869 qpair failed and we were unable to recover it. 00:54:11.869 [2024-12-09 05:49:05.773783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.869 [2024-12-09 05:49:05.773809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.869 qpair failed and we were unable to recover it. 00:54:11.869 [2024-12-09 05:49:05.773907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.869 [2024-12-09 05:49:05.773951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.869 qpair failed and we were unable to recover it. 00:54:11.869 [2024-12-09 05:49:05.774077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.869 [2024-12-09 05:49:05.774106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.869 qpair failed and we were unable to recover it. 
00:54:11.869 [2024-12-09 05:49:05.774206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.869 [2024-12-09 05:49:05.774236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.869 qpair failed and we were unable to recover it. 00:54:11.869 [2024-12-09 05:49:05.774378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.869 [2024-12-09 05:49:05.774406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.869 qpair failed and we were unable to recover it. 00:54:11.869 [2024-12-09 05:49:05.774518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.869 [2024-12-09 05:49:05.774545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.869 qpair failed and we were unable to recover it. 00:54:11.869 [2024-12-09 05:49:05.774645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.869 [2024-12-09 05:49:05.774672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.869 qpair failed and we were unable to recover it. 00:54:11.869 [2024-12-09 05:49:05.774818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.869 [2024-12-09 05:49:05.774848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.869 qpair failed and we were unable to recover it. 00:54:11.869 [2024-12-09 05:49:05.774972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.869 [2024-12-09 05:49:05.775016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.869 qpair failed and we were unable to recover it. 00:54:11.869 [2024-12-09 05:49:05.775112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.869 [2024-12-09 05:49:05.775143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.869 qpair failed and we were unable to recover it. 00:54:11.869 [2024-12-09 05:49:05.775253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.869 [2024-12-09 05:49:05.775297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.869 qpair failed and we were unable to recover it. 00:54:11.869 [2024-12-09 05:49:05.775438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.869 [2024-12-09 05:49:05.775465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.869 qpair failed and we were unable to recover it. 00:54:11.869 [2024-12-09 05:49:05.775587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.869 [2024-12-09 05:49:05.775630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.869 qpair failed and we were unable to recover it. 
00:54:11.869 [2024-12-09 05:49:05.775762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.869 [2024-12-09 05:49:05.775791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.869 qpair failed and we were unable to recover it. 00:54:11.869 [2024-12-09 05:49:05.775974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.869 [2024-12-09 05:49:05.776004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.869 qpair failed and we were unable to recover it. 00:54:11.869 [2024-12-09 05:49:05.776107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.869 [2024-12-09 05:49:05.776136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.869 qpair failed and we were unable to recover it. 00:54:11.869 [2024-12-09 05:49:05.776279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.869 [2024-12-09 05:49:05.776334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.869 qpair failed and we were unable to recover it. 00:54:11.869 [2024-12-09 05:49:05.776446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.869 [2024-12-09 05:49:05.776474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.869 qpair failed and we were unable to recover it. 00:54:11.869 [2024-12-09 05:49:05.776589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.869 [2024-12-09 05:49:05.776617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.869 qpair failed and we were unable to recover it. 00:54:11.869 [2024-12-09 05:49:05.776739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.869 [2024-12-09 05:49:05.776766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.869 qpair failed and we were unable to recover it. 00:54:11.869 [2024-12-09 05:49:05.776893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.869 [2024-12-09 05:49:05.776922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.869 qpair failed and we were unable to recover it. 00:54:11.869 [2024-12-09 05:49:05.777017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.869 [2024-12-09 05:49:05.777047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.869 qpair failed and we were unable to recover it. 00:54:11.869 [2024-12-09 05:49:05.777174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.869 [2024-12-09 05:49:05.777204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.869 qpair failed and we were unable to recover it. 
00:54:11.869 [2024-12-09 05:49:05.777356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.869 [2024-12-09 05:49:05.777384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.869 qpair failed and we were unable to recover it. 00:54:11.869 [2024-12-09 05:49:05.777505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.870 [2024-12-09 05:49:05.777533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.870 qpair failed and we were unable to recover it. 00:54:11.870 [2024-12-09 05:49:05.777716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.870 [2024-12-09 05:49:05.777744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.870 qpair failed and we were unable to recover it. 00:54:11.870 [2024-12-09 05:49:05.777865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.870 [2024-12-09 05:49:05.777894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.870 qpair failed and we were unable to recover it. 00:54:11.870 [2024-12-09 05:49:05.778019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.870 [2024-12-09 05:49:05.778048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.870 qpair failed and we were unable to recover it. 00:54:11.870 [2024-12-09 05:49:05.778149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.870 [2024-12-09 05:49:05.778178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.870 qpair failed and we were unable to recover it. 00:54:11.870 [2024-12-09 05:49:05.778285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.870 [2024-12-09 05:49:05.778314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.870 qpair failed and we were unable to recover it. 00:54:11.870 [2024-12-09 05:49:05.778440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.870 [2024-12-09 05:49:05.778467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.870 qpair failed and we were unable to recover it. 00:54:11.870 [2024-12-09 05:49:05.778599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.870 [2024-12-09 05:49:05.778629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.870 qpair failed and we were unable to recover it. 00:54:11.870 [2024-12-09 05:49:05.778753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.870 [2024-12-09 05:49:05.778783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.870 qpair failed and we were unable to recover it. 
00:54:11.870 [2024-12-09 05:49:05.778900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.870 [2024-12-09 05:49:05.778929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.870 qpair failed and we were unable to recover it. 00:54:11.870 [2024-12-09 05:49:05.779069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.870 [2024-12-09 05:49:05.779098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.870 qpair failed and we were unable to recover it. 00:54:11.870 [2024-12-09 05:49:05.779211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.870 [2024-12-09 05:49:05.779239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.870 qpair failed and we were unable to recover it. 00:54:11.870 [2024-12-09 05:49:05.779375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.870 [2024-12-09 05:49:05.779403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.870 qpair failed and we were unable to recover it. 00:54:11.870 [2024-12-09 05:49:05.779534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.870 [2024-12-09 05:49:05.779560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.870 qpair failed and we were unable to recover it. 00:54:11.870 [2024-12-09 05:49:05.779684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.870 [2024-12-09 05:49:05.779710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.870 qpair failed and we were unable to recover it. 00:54:11.870 [2024-12-09 05:49:05.779822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.870 [2024-12-09 05:49:05.779849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.870 qpair failed and we were unable to recover it. 00:54:11.870 [2024-12-09 05:49:05.779977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.870 [2024-12-09 05:49:05.780007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.870 qpair failed and we were unable to recover it. 00:54:11.870 [2024-12-09 05:49:05.780191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.870 [2024-12-09 05:49:05.780235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.870 qpair failed and we were unable to recover it. 00:54:11.870 [2024-12-09 05:49:05.780339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.870 [2024-12-09 05:49:05.780366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.870 qpair failed and we were unable to recover it. 
00:54:11.870 [2024-12-09 05:49:05.780485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.870 [2024-12-09 05:49:05.780511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.870 qpair failed and we were unable to recover it. 00:54:11.870 [2024-12-09 05:49:05.780651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.870 [2024-12-09 05:49:05.780693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.870 qpair failed and we were unable to recover it. 00:54:11.870 [2024-12-09 05:49:05.780827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.870 [2024-12-09 05:49:05.780857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.870 qpair failed and we were unable to recover it. 00:54:11.871 [2024-12-09 05:49:05.780989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.871 [2024-12-09 05:49:05.781020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.871 qpair failed and we were unable to recover it. 00:54:11.871 [2024-12-09 05:49:05.781150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.871 [2024-12-09 05:49:05.781180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.871 qpair failed and we were unable to recover it. 00:54:11.871 [2024-12-09 05:49:05.781280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.871 [2024-12-09 05:49:05.781310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.871 qpair failed and we were unable to recover it. 00:54:11.871 [2024-12-09 05:49:05.781421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.871 [2024-12-09 05:49:05.781449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.871 qpair failed and we were unable to recover it. 00:54:11.871 [2024-12-09 05:49:05.781570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.871 [2024-12-09 05:49:05.781598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.871 qpair failed and we were unable to recover it. 00:54:11.871 [2024-12-09 05:49:05.781760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.871 [2024-12-09 05:49:05.781789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.871 qpair failed and we were unable to recover it. 00:54:11.871 [2024-12-09 05:49:05.781889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.871 [2024-12-09 05:49:05.781919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.871 qpair failed and we were unable to recover it. 
00:54:11.871 [2024-12-09 05:49:05.782082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.871 [2024-12-09 05:49:05.782112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.871 qpair failed and we were unable to recover it. 00:54:11.871 [2024-12-09 05:49:05.782218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.871 [2024-12-09 05:49:05.782248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.871 qpair failed and we were unable to recover it. 00:54:11.871 [2024-12-09 05:49:05.782400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.871 [2024-12-09 05:49:05.782428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.871 qpair failed and we were unable to recover it. 00:54:11.871 [2024-12-09 05:49:05.782544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.871 [2024-12-09 05:49:05.782591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.871 qpair failed and we were unable to recover it. 00:54:11.871 [2024-12-09 05:49:05.782708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.871 [2024-12-09 05:49:05.782735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.871 qpair failed and we were unable to recover it. 00:54:11.871 [2024-12-09 05:49:05.782900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.871 [2024-12-09 05:49:05.782930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.871 qpair failed and we were unable to recover it. 00:54:11.871 [2024-12-09 05:49:05.783102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.871 [2024-12-09 05:49:05.783131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.871 qpair failed and we were unable to recover it. 00:54:11.871 [2024-12-09 05:49:05.783246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.871 [2024-12-09 05:49:05.783281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.871 qpair failed and we were unable to recover it. 00:54:11.871 [2024-12-09 05:49:05.783396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.871 [2024-12-09 05:49:05.783425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.871 qpair failed and we were unable to recover it. 00:54:11.871 [2024-12-09 05:49:05.783512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.871 [2024-12-09 05:49:05.783540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.871 qpair failed and we were unable to recover it. 
00:54:11.871 [2024-12-09 05:49:05.783678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.871 [2024-12-09 05:49:05.783704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.871 qpair failed and we were unable to recover it. 00:54:11.871 [2024-12-09 05:49:05.783823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.871 [2024-12-09 05:49:05.783849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.871 qpair failed and we were unable to recover it. 00:54:11.871 [2024-12-09 05:49:05.783980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.871 [2024-12-09 05:49:05.784010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.871 qpair failed and we were unable to recover it. 00:54:11.871 [2024-12-09 05:49:05.784145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.871 [2024-12-09 05:49:05.784177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.871 qpair failed and we were unable to recover it. 00:54:11.871 [2024-12-09 05:49:05.784342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.871 [2024-12-09 05:49:05.784371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.871 qpair failed and we were unable to recover it. 00:54:11.872 [2024-12-09 05:49:05.784486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.872 [2024-12-09 05:49:05.784513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.872 qpair failed and we were unable to recover it. 00:54:11.872 [2024-12-09 05:49:05.784609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.872 [2024-12-09 05:49:05.784637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.872 qpair failed and we were unable to recover it. 00:54:11.872 [2024-12-09 05:49:05.784792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.872 [2024-12-09 05:49:05.784819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.872 qpair failed and we were unable to recover it. 00:54:11.872 [2024-12-09 05:49:05.784929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.872 [2024-12-09 05:49:05.784956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.872 qpair failed and we were unable to recover it. 00:54:11.872 [2024-12-09 05:49:05.785090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.872 [2024-12-09 05:49:05.785119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.872 qpair failed and we were unable to recover it. 
00:54:11.872 [2024-12-09 05:49:05.785247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.872 [2024-12-09 05:49:05.785283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.872 qpair failed and we were unable to recover it. 00:54:11.872 [2024-12-09 05:49:05.785419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.872 [2024-12-09 05:49:05.785448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.872 qpair failed and we were unable to recover it. 00:54:11.872 [2024-12-09 05:49:05.785544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.872 [2024-12-09 05:49:05.785572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.872 qpair failed and we were unable to recover it. 00:54:11.872 [2024-12-09 05:49:05.785742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.872 [2024-12-09 05:49:05.785772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.872 qpair failed and we were unable to recover it. 00:54:11.872 [2024-12-09 05:49:05.785896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.872 [2024-12-09 05:49:05.785926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.872 qpair failed and we were unable to recover it. 00:54:11.872 [2024-12-09 05:49:05.786016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.872 [2024-12-09 05:49:05.786046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.872 qpair failed and we were unable to recover it. 00:54:11.872 [2024-12-09 05:49:05.786177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.872 [2024-12-09 05:49:05.786221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.872 qpair failed and we were unable to recover it. 00:54:11.872 [2024-12-09 05:49:05.786379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.872 [2024-12-09 05:49:05.786407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.872 qpair failed and we were unable to recover it. 00:54:11.872 [2024-12-09 05:49:05.786534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.872 [2024-12-09 05:49:05.786578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.872 qpair failed and we were unable to recover it. 
00:54:11.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 763391 Killed "${NVMF_APP[@]}" "$@"
00:54:11.872 [2024-12-09 05:49:05.786706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.872 [2024-12-09 05:49:05.786735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.872 qpair failed and we were unable to recover it. 00:54:11.872 [2024-12-09 05:49:05.786927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.872 [2024-12-09 05:49:05.786957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.872 qpair failed and we were unable to recover it.
00:54:11.872 05:49:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:54:11.872 [2024-12-09 05:49:05.787045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.872 [2024-12-09 05:49:05.787075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.872 qpair failed and we were unable to recover it.
00:54:11.872 05:49:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:54:11.872 [2024-12-09 05:49:05.787190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.872 [2024-12-09 05:49:05.787220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.872 qpair failed and we were unable to recover it. 00:54:11.872 [2024-12-09 05:49:05.787358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.872 [2024-12-09 05:49:05.787403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.872 qpair failed and we were unable to recover it.
00:54:11.872 05:49:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:54:11.872 [2024-12-09 05:49:05.787541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.872 [2024-12-09 05:49:05.787568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.873 qpair failed and we were unable to recover it.
00:54:11.873 05:49:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:54:11.873 [2024-12-09 05:49:05.787665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.873 [2024-12-09 05:49:05.787692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.873 qpair failed and we were unable to recover it.
00:54:11.873 05:49:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:54:11.873 [2024-12-09 05:49:05.787863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.873 [2024-12-09 05:49:05.787893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.873 qpair failed and we were unable to recover it. 00:54:11.873 [2024-12-09 05:49:05.788029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.873 [2024-12-09 05:49:05.788060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.873 qpair failed and we were unable to recover it. 00:54:11.873 [2024-12-09 05:49:05.788159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.873 [2024-12-09 05:49:05.788189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.873 qpair failed and we were unable to recover it. 00:54:11.873 [2024-12-09 05:49:05.788335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.873 [2024-12-09 05:49:05.788363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.873 qpair failed and we were unable to recover it. 00:54:11.873 [2024-12-09 05:49:05.788482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.873 [2024-12-09 05:49:05.788509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.873 qpair failed and we were unable to recover it. 00:54:11.873 [2024-12-09 05:49:05.788651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.873 [2024-12-09 05:49:05.788678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.873 qpair failed and we were unable to recover it. 00:54:11.873 [2024-12-09 05:49:05.788801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.873 [2024-12-09 05:49:05.788827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.873 qpair failed and we were unable to recover it. 00:54:11.873 [2024-12-09 05:49:05.788968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.873 [2024-12-09 05:49:05.788999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.873 qpair failed and we were unable to recover it. 00:54:11.873 [2024-12-09 05:49:05.789158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.873 [2024-12-09 05:49:05.789190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.873 qpair failed and we were unable to recover it. 
00:54:11.873 [2024-12-09 05:49:05.789309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.873 [2024-12-09 05:49:05.789339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.873 qpair failed and we were unable to recover it. 00:54:11.873 [2024-12-09 05:49:05.789453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.873 [2024-12-09 05:49:05.789481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.873 qpair failed and we were unable to recover it. 00:54:11.873 [2024-12-09 05:49:05.789609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.873 [2024-12-09 05:49:05.789649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.873 qpair failed and we were unable to recover it. 00:54:11.873 [2024-12-09 05:49:05.789740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.873 [2024-12-09 05:49:05.789767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.873 qpair failed and we were unable to recover it. 00:54:11.873 [2024-12-09 05:49:05.789884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.873 [2024-12-09 05:49:05.789929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.873 qpair failed and we were unable to recover it. 00:54:11.873 [2024-12-09 05:49:05.790053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.873 [2024-12-09 05:49:05.790080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.873 qpair failed and we were unable to recover it. 00:54:11.873 [2024-12-09 05:49:05.790241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.873 [2024-12-09 05:49:05.790283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.873 qpair failed and we were unable to recover it. 00:54:11.873 [2024-12-09 05:49:05.790422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.873 [2024-12-09 05:49:05.790453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.873 qpair failed and we were unable to recover it. 00:54:11.873 [2024-12-09 05:49:05.790580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.873 [2024-12-09 05:49:05.790611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.873 qpair failed and we were unable to recover it. 00:54:11.873 [2024-12-09 05:49:05.790795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.873 [2024-12-09 05:49:05.790822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.873 qpair failed and we were unable to recover it. 
00:54:11.873 [2024-12-09 05:49:05.790906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.873 [2024-12-09 05:49:05.790932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.873 qpair failed and we were unable to recover it. 00:54:11.873 [2024-12-09 05:49:05.791015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.873 [2024-12-09 05:49:05.791061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.873 qpair failed and we were unable to recover it. 00:54:11.873 [2024-12-09 05:49:05.791194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.873 [2024-12-09 05:49:05.791225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.873 qpair failed and we were unable to recover it. 00:54:11.873 [2024-12-09 05:49:05.791328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.873 [2024-12-09 05:49:05.791360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.873 qpair failed and we were unable to recover it. 00:54:11.873 [2024-12-09 05:49:05.791481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.873 [2024-12-09 05:49:05.791512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.873 qpair failed and we were unable to recover it. 00:54:11.874 [2024-12-09 05:49:05.791651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.874 [2024-12-09 05:49:05.791682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.874 qpair failed and we were unable to recover it. 00:54:11.874 [2024-12-09 05:49:05.791831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.874 [2024-12-09 05:49:05.791863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.874 05:49:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=763940 00:54:11.874 qpair failed and we were unable to recover it. 00:54:11.874 05:49:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:54:11.874 [2024-12-09 05:49:05.791997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.874 [2024-12-09 05:49:05.792029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.874 qpair failed and we were unable to recover it. 
00:54:11.874 05:49:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 763940 00:54:11.874 [2024-12-09 05:49:05.792159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.874 [2024-12-09 05:49:05.792191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.874 qpair failed and we were unable to recover it. 00:54:11.874 05:49:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 763940 ']' 00:54:11.874 [2024-12-09 05:49:05.792360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.874 [2024-12-09 05:49:05.792387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.874 qpair failed and we were unable to recover it. 00:54:11.874 05:49:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:11.874 [2024-12-09 05:49:05.792512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.874 [2024-12-09 05:49:05.792539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.874 05:49:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:11.874 qpair failed and we were unable to recover it. 00:54:11.874 [2024-12-09 05:49:05.792660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.874 [2024-12-09 05:49:05.792691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.874 qpair failed and we were unable to recover it. 00:54:11.874 05:49:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:11.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:11.874 [2024-12-09 05:49:05.792817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.874 [2024-12-09 05:49:05.792848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.874 qpair failed and we were unable to recover it. 00:54:11.874 05:49:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:11.874 [2024-12-09 05:49:05.792976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.874 [2024-12-09 05:49:05.793010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.874 05:49:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:54:11.874 qpair failed and we were unable to recover it. 
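Interleaved with the connect failures, the harness is starting a fresh nvmf_tgt (pid 763940) inside the cvl_0_0_ns_spdk network namespace and then calling waitforlisten 763940, which blocks until that process is up and serving RPC on /var/tmp/spdk.sock. As a rough illustration only (the real helper is the waitforlisten function in autotest_common.sh shown in the trace, which does more than this), such a wait loop can be approximated as:

    # Rough sketch of a wait-for-listen loop; illustrative, not SPDK's code.
    pid=763940
    sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited before listening"; break; }
        [ -S "$sock" ] && { echo "RPC socket $sock is up"; break; }
        sleep 0.5
    done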
00:54:11.874 [2024-12-09 05:49:05.793138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.874 [2024-12-09 05:49:05.793169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.874 qpair failed and we were unable to recover it. 00:54:11.874 [2024-12-09 05:49:05.793302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.874 [2024-12-09 05:49:05.793341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.874 qpair failed and we were unable to recover it. 00:54:11.874 [2024-12-09 05:49:05.793440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.874 [2024-12-09 05:49:05.793472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.874 qpair failed and we were unable to recover it. 00:54:11.874 [2024-12-09 05:49:05.793589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.874 [2024-12-09 05:49:05.793621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.874 qpair failed and we were unable to recover it. 00:54:11.874 [2024-12-09 05:49:05.793751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.874 [2024-12-09 05:49:05.793782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.874 qpair failed and we were unable to recover it. 00:54:11.874 [2024-12-09 05:49:05.793907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.874 [2024-12-09 05:49:05.793938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.874 qpair failed and we were unable to recover it. 00:54:11.874 [2024-12-09 05:49:05.794067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.874 [2024-12-09 05:49:05.794097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.874 qpair failed and we were unable to recover it. 00:54:11.874 [2024-12-09 05:49:05.794212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.874 [2024-12-09 05:49:05.794238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.874 qpair failed and we were unable to recover it. 00:54:11.874 [2024-12-09 05:49:05.794380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.874 [2024-12-09 05:49:05.794406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.874 qpair failed and we were unable to recover it. 00:54:11.874 [2024-12-09 05:49:05.794567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.874 [2024-12-09 05:49:05.794609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.874 qpair failed and we were unable to recover it. 
00:54:11.874 [2024-12-09 05:49:05.794725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.874 [2024-12-09 05:49:05.794751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.874 qpair failed and we were unable to recover it. 00:54:11.874 [2024-12-09 05:49:05.794892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.874 [2024-12-09 05:49:05.794920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.874 qpair failed and we were unable to recover it. 00:54:11.875 [2024-12-09 05:49:05.795036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.795066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 00:54:11.875 [2024-12-09 05:49:05.795162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.795192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 00:54:11.875 [2024-12-09 05:49:05.795316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.795369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 00:54:11.875 [2024-12-09 05:49:05.795479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.795529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 00:54:11.875 [2024-12-09 05:49:05.795684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.795716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 00:54:11.875 [2024-12-09 05:49:05.795889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.795919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 00:54:11.875 [2024-12-09 05:49:05.796039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.796068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 00:54:11.875 [2024-12-09 05:49:05.796201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.796233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 
00:54:11.875 [2024-12-09 05:49:05.796433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.796475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 00:54:11.875 [2024-12-09 05:49:05.796635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.796665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 00:54:11.875 [2024-12-09 05:49:05.796764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.796795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 00:54:11.875 [2024-12-09 05:49:05.796934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.796963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 00:54:11.875 [2024-12-09 05:49:05.797087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.797117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 00:54:11.875 [2024-12-09 05:49:05.797245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.797284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 00:54:11.875 [2024-12-09 05:49:05.797394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.797424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 00:54:11.875 [2024-12-09 05:49:05.797549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.797578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 00:54:11.875 [2024-12-09 05:49:05.797711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.797742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 00:54:11.875 [2024-12-09 05:49:05.797912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.797939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 
00:54:11.875 [2024-12-09 05:49:05.798057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.798085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 00:54:11.875 [2024-12-09 05:49:05.798227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.798258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 00:54:11.875 [2024-12-09 05:49:05.798377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.798407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 00:54:11.875 [2024-12-09 05:49:05.798534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.798564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 00:54:11.875 [2024-12-09 05:49:05.798708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.798738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 00:54:11.875 [2024-12-09 05:49:05.798863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.798893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 00:54:11.875 [2024-12-09 05:49:05.799018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.799049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 00:54:11.875 [2024-12-09 05:49:05.799142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.799171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 00:54:11.875 [2024-12-09 05:49:05.799296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.799334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 00:54:11.875 [2024-12-09 05:49:05.799429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.875 [2024-12-09 05:49:05.799460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.875 qpair failed and we were unable to recover it. 
00:54:11.875 [2024-12-09 05:49:05.799591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.876 [2024-12-09 05:49:05.799621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.876 qpair failed and we were unable to recover it. 00:54:11.876 [2024-12-09 05:49:05.799775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.876 [2024-12-09 05:49:05.799805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.876 qpair failed and we were unable to recover it. 00:54:11.876 [2024-12-09 05:49:05.799932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.876 [2024-12-09 05:49:05.799962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.876 qpair failed and we were unable to recover it. 00:54:11.876 [2024-12-09 05:49:05.800066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.876 [2024-12-09 05:49:05.800096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.876 qpair failed and we were unable to recover it. 00:54:11.876 [2024-12-09 05:49:05.800250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.876 [2024-12-09 05:49:05.800301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.876 qpair failed and we were unable to recover it. 00:54:11.876 [2024-12-09 05:49:05.800416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.876 [2024-12-09 05:49:05.800442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.876 qpair failed and we were unable to recover it. 00:54:11.876 [2024-12-09 05:49:05.800548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.876 [2024-12-09 05:49:05.800587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.876 qpair failed and we were unable to recover it. 00:54:11.876 [2024-12-09 05:49:05.800732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.876 [2024-12-09 05:49:05.800762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.876 qpair failed and we were unable to recover it. 00:54:11.876 [2024-12-09 05:49:05.800887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.876 [2024-12-09 05:49:05.800917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.876 qpair failed and we were unable to recover it. 00:54:11.876 [2024-12-09 05:49:05.801076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.876 [2024-12-09 05:49:05.801106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.876 qpair failed and we were unable to recover it. 
00:54:11.876 [2024-12-09 05:49:05.801230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.876 [2024-12-09 05:49:05.801259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.876 qpair failed and we were unable to recover it. 00:54:11.876 [2024-12-09 05:49:05.801400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.876 [2024-12-09 05:49:05.801431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.876 qpair failed and we were unable to recover it. 00:54:11.876 [2024-12-09 05:49:05.801573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.876 [2024-12-09 05:49:05.801605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.876 qpair failed and we were unable to recover it. 00:54:11.876 [2024-12-09 05:49:05.801730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.876 [2024-12-09 05:49:05.801761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.876 qpair failed and we were unable to recover it. 00:54:11.876 [2024-12-09 05:49:05.801916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.876 [2024-12-09 05:49:05.801946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.876 qpair failed and we were unable to recover it. 00:54:11.876 [2024-12-09 05:49:05.802077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.876 [2024-12-09 05:49:05.802107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.876 qpair failed and we were unable to recover it. 00:54:11.876 [2024-12-09 05:49:05.802258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.876 [2024-12-09 05:49:05.802311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.876 qpair failed and we were unable to recover it. 00:54:11.876 [2024-12-09 05:49:05.802432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.876 [2024-12-09 05:49:05.802458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.876 qpair failed and we were unable to recover it. 00:54:11.876 [2024-12-09 05:49:05.802601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.876 [2024-12-09 05:49:05.802631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.876 qpair failed and we were unable to recover it. 00:54:11.876 [2024-12-09 05:49:05.802755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.876 [2024-12-09 05:49:05.802785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.876 qpair failed and we were unable to recover it. 
00:54:11.876 [2024-12-09 05:49:05.802876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.876 [2024-12-09 05:49:05.802910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.876 qpair failed and we were unable to recover it. 00:54:11.876 [2024-12-09 05:49:05.803014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.876 [2024-12-09 05:49:05.803044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.876 qpair failed and we were unable to recover it. 00:54:11.876 [2024-12-09 05:49:05.803138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.876 [2024-12-09 05:49:05.803167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.876 qpair failed and we were unable to recover it. 00:54:11.876 [2024-12-09 05:49:05.803265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.876 [2024-12-09 05:49:05.803301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.876 qpair failed and we were unable to recover it. 00:54:11.876 [2024-12-09 05:49:05.803402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.876 [2024-12-09 05:49:05.803432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.876 qpair failed and we were unable to recover it. 00:54:11.876 [2024-12-09 05:49:05.803575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.876 [2024-12-09 05:49:05.803605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.876 qpair failed and we were unable to recover it. 00:54:11.876 [2024-12-09 05:49:05.803741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.877 [2024-12-09 05:49:05.803768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.877 qpair failed and we were unable to recover it. 00:54:11.877 [2024-12-09 05:49:05.803933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.877 [2024-12-09 05:49:05.803961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.877 qpair failed and we were unable to recover it. 00:54:11.877 [2024-12-09 05:49:05.804074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.877 [2024-12-09 05:49:05.804104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.877 qpair failed and we were unable to recover it. 00:54:11.877 [2024-12-09 05:49:05.804236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.877 [2024-12-09 05:49:05.804266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.877 qpair failed and we were unable to recover it. 
00:54:11.877 [2024-12-09 05:49:05.804423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.877 [2024-12-09 05:49:05.804455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.877 qpair failed and we were unable to recover it. 00:54:11.877 [2024-12-09 05:49:05.804556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.877 [2024-12-09 05:49:05.804585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.877 qpair failed and we were unable to recover it. 00:54:11.877 [2024-12-09 05:49:05.804712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.877 [2024-12-09 05:49:05.804742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.877 qpair failed and we were unable to recover it. 00:54:11.877 [2024-12-09 05:49:05.804840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.877 [2024-12-09 05:49:05.804871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.877 qpair failed and we were unable to recover it. 00:54:11.877 [2024-12-09 05:49:05.804980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.877 [2024-12-09 05:49:05.805010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.877 qpair failed and we were unable to recover it. 00:54:11.877 [2024-12-09 05:49:05.805168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.877 [2024-12-09 05:49:05.805198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.877 qpair failed and we were unable to recover it. 00:54:11.877 [2024-12-09 05:49:05.805306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.877 [2024-12-09 05:49:05.805346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.877 qpair failed and we were unable to recover it. 00:54:11.877 [2024-12-09 05:49:05.805507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.877 [2024-12-09 05:49:05.805538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.877 qpair failed and we were unable to recover it. 00:54:11.877 [2024-12-09 05:49:05.805634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.877 [2024-12-09 05:49:05.805662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.877 qpair failed and we were unable to recover it. 00:54:11.877 [2024-12-09 05:49:05.805766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.877 [2024-12-09 05:49:05.805796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.877 qpair failed and we were unable to recover it. 
00:54:11.877 [2024-12-09 05:49:05.805949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.877 [2024-12-09 05:49:05.805978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.877 qpair failed and we were unable to recover it. 00:54:11.877 [2024-12-09 05:49:05.806085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.877 [2024-12-09 05:49:05.806129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.877 qpair failed and we were unable to recover it. 00:54:11.877 [2024-12-09 05:49:05.806239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.877 [2024-12-09 05:49:05.806295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.877 qpair failed and we were unable to recover it. 00:54:11.877 [2024-12-09 05:49:05.806399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.877 [2024-12-09 05:49:05.806428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.877 qpair failed and we were unable to recover it. 00:54:11.877 [2024-12-09 05:49:05.806519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.877 [2024-12-09 05:49:05.806547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.877 qpair failed and we were unable to recover it. 00:54:11.877 [2024-12-09 05:49:05.806672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.877 [2024-12-09 05:49:05.806699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.877 qpair failed and we were unable to recover it. 00:54:11.877 [2024-12-09 05:49:05.806789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.877 [2024-12-09 05:49:05.806816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.877 qpair failed and we were unable to recover it. 00:54:11.877 [2024-12-09 05:49:05.806960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.877 [2024-12-09 05:49:05.806992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.877 qpair failed and we were unable to recover it. 00:54:11.877 [2024-12-09 05:49:05.807090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.877 [2024-12-09 05:49:05.807117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.877 qpair failed and we were unable to recover it. 00:54:11.877 [2024-12-09 05:49:05.807256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.877 [2024-12-09 05:49:05.807291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.877 qpair failed and we were unable to recover it. 
00:54:11.877 [2024-12-09 05:49:05.807404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.877 [2024-12-09 05:49:05.807431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.877 qpair failed and we were unable to recover it. 00:54:11.877 [2024-12-09 05:49:05.807512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.877 [2024-12-09 05:49:05.807538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.877 qpair failed and we were unable to recover it. 00:54:11.878 [2024-12-09 05:49:05.807662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.807703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 00:54:11.878 [2024-12-09 05:49:05.807841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.807877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 00:54:11.878 [2024-12-09 05:49:05.808033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.808076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 00:54:11.878 [2024-12-09 05:49:05.808250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.808290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 00:54:11.878 [2024-12-09 05:49:05.808418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.808448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 00:54:11.878 [2024-12-09 05:49:05.808568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.808598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 00:54:11.878 [2024-12-09 05:49:05.808703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.808738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 00:54:11.878 [2024-12-09 05:49:05.808854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.808905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 
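Note that the tqpair pointer in the failures now varies (0x7faa14000b90, 0xe16fa0, 0x7faa0c000b90, 0x7faa08000b90): the host keeps allocating new qpair objects and each connect attempt is still refused. When triaging a log like this offline, a quick way to see how the failures are spread across qpairs is a grep/uniq pass; console.log below is just a placeholder name for a saved copy of this output:

    # Hypothetical triage one-liner (console.log is an assumed file name):
    # count refused connect attempts per qpair pointer.
    grep -o 'tqpair=0x[0-9a-f]*' console.log | sort | uniq -c | sort -rn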
00:54:11.878 [2024-12-09 05:49:05.809020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.809046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 00:54:11.878 [2024-12-09 05:49:05.809138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.809165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 00:54:11.878 [2024-12-09 05:49:05.809258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.809293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 00:54:11.878 [2024-12-09 05:49:05.809421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.809449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 00:54:11.878 [2024-12-09 05:49:05.809590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.809645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 00:54:11.878 [2024-12-09 05:49:05.809788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.809837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 00:54:11.878 [2024-12-09 05:49:05.809989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.810040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 00:54:11.878 [2024-12-09 05:49:05.810148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.810178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 00:54:11.878 [2024-12-09 05:49:05.810332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.810364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 00:54:11.878 [2024-12-09 05:49:05.810501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.810534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 
00:54:11.878 [2024-12-09 05:49:05.810743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.810796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 00:54:11.878 [2024-12-09 05:49:05.810945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.810974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 00:54:11.878 [2024-12-09 05:49:05.811063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.811091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 00:54:11.878 [2024-12-09 05:49:05.811183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.811210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 00:54:11.878 [2024-12-09 05:49:05.811359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.811405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 00:54:11.878 [2024-12-09 05:49:05.811544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.811577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 00:54:11.878 [2024-12-09 05:49:05.811703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.811733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 00:54:11.878 [2024-12-09 05:49:05.811860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.811890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 00:54:11.878 [2024-12-09 05:49:05.812013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.812043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 00:54:11.878 [2024-12-09 05:49:05.812180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.812211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 
00:54:11.878 [2024-12-09 05:49:05.812353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.812386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.878 qpair failed and we were unable to recover it. 00:54:11.878 [2024-12-09 05:49:05.812509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.878 [2024-12-09 05:49:05.812541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 00:54:11.879 [2024-12-09 05:49:05.812668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.812718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 00:54:11.879 [2024-12-09 05:49:05.812847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.812880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 00:54:11.879 [2024-12-09 05:49:05.812974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.813005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 00:54:11.879 [2024-12-09 05:49:05.813101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.813131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 00:54:11.879 [2024-12-09 05:49:05.813225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.813255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 00:54:11.879 [2024-12-09 05:49:05.813395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.813424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 00:54:11.879 [2024-12-09 05:49:05.813529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.813558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 00:54:11.879 [2024-12-09 05:49:05.813679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.813708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 
00:54:11.879 [2024-12-09 05:49:05.813821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.813848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 00:54:11.879 [2024-12-09 05:49:05.813928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.813954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 00:54:11.879 [2024-12-09 05:49:05.814038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.814064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 00:54:11.879 [2024-12-09 05:49:05.814205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.814232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 00:54:11.879 [2024-12-09 05:49:05.814324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.814350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 00:54:11.879 [2024-12-09 05:49:05.814445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.814471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 00:54:11.879 [2024-12-09 05:49:05.814595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.814621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 00:54:11.879 [2024-12-09 05:49:05.814743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.814769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 00:54:11.879 [2024-12-09 05:49:05.814867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.814897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 00:54:11.879 [2024-12-09 05:49:05.815005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.815036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 
00:54:11.879 [2024-12-09 05:49:05.815164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.815194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 00:54:11.879 [2024-12-09 05:49:05.815327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.815357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 00:54:11.879 [2024-12-09 05:49:05.815444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.815472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 00:54:11.879 [2024-12-09 05:49:05.815592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.815621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 00:54:11.879 [2024-12-09 05:49:05.815718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.815747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 00:54:11.879 [2024-12-09 05:49:05.815837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.815867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 00:54:11.879 [2024-12-09 05:49:05.815958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.815987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 00:54:11.879 [2024-12-09 05:49:05.816117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.816159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 00:54:11.879 [2024-12-09 05:49:05.816289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.816316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 00:54:11.879 [2024-12-09 05:49:05.816400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.816427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.879 qpair failed and we were unable to recover it. 
00:54:11.879 [2024-12-09 05:49:05.816508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.879 [2024-12-09 05:49:05.816535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.816650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.816677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.816788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.816814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.816935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.816968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.817097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.817127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.817250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.817295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.817402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.817432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.817526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.817574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.817705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.817737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.817840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.817870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 
00:54:11.880 [2024-12-09 05:49:05.817987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.818032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.818134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.818166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.818283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.818314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.818441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.818472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.818649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.818684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.818845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.818890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.819058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.819087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.819206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.819235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.819349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.819421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.819541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.819575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 
00:54:11.880 [2024-12-09 05:49:05.819733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.819785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.819934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.819985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.820131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.820161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.820300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.820330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.820421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.820450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.820544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.820587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.820682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.820718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.820864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.820914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.821006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.821036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.821195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.821225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 
00:54:11.880 [2024-12-09 05:49:05.821343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.880 [2024-12-09 05:49:05.821375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.880 qpair failed and we were unable to recover it. 00:54:11.880 [2024-12-09 05:49:05.821468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.821498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 00:54:11.881 [2024-12-09 05:49:05.821623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.821653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 00:54:11.881 [2024-12-09 05:49:05.821764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.821811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 00:54:11.881 [2024-12-09 05:49:05.821898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.821925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 00:54:11.881 [2024-12-09 05:49:05.822039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.822065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 00:54:11.881 [2024-12-09 05:49:05.822156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.822182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 00:54:11.881 [2024-12-09 05:49:05.822310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.822340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 00:54:11.881 [2024-12-09 05:49:05.822424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.822451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 00:54:11.881 [2024-12-09 05:49:05.822635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.822688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 
00:54:11.881 [2024-12-09 05:49:05.822794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.822825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 00:54:11.881 [2024-12-09 05:49:05.822917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.822948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 00:54:11.881 [2024-12-09 05:49:05.823087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.823114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 00:54:11.881 [2024-12-09 05:49:05.823237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.823264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 00:54:11.881 [2024-12-09 05:49:05.823407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.823455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 00:54:11.881 [2024-12-09 05:49:05.823553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.823591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 00:54:11.881 [2024-12-09 05:49:05.823761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.823796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 00:54:11.881 [2024-12-09 05:49:05.823902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.823933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 00:54:11.881 [2024-12-09 05:49:05.824070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.824106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 00:54:11.881 [2024-12-09 05:49:05.824213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.824242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 
00:54:11.881 [2024-12-09 05:49:05.824361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.824392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 00:54:11.881 [2024-12-09 05:49:05.824509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.824542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 00:54:11.881 [2024-12-09 05:49:05.824733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.824784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 00:54:11.881 [2024-12-09 05:49:05.824899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.824925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 00:54:11.881 [2024-12-09 05:49:05.825047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.825074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 00:54:11.881 [2024-12-09 05:49:05.825183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.825212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 00:54:11.881 [2024-12-09 05:49:05.825370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.825402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 00:54:11.881 [2024-12-09 05:49:05.825576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.825626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 00:54:11.881 [2024-12-09 05:49:05.825744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.825797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 00:54:11.881 [2024-12-09 05:49:05.825890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.825924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 
00:54:11.881 [2024-12-09 05:49:05.826051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.881 [2024-12-09 05:49:05.826085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.881 qpair failed and we were unable to recover it. 00:54:11.881 [2024-12-09 05:49:05.826225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.882 [2024-12-09 05:49:05.826265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.882 qpair failed and we were unable to recover it. 00:54:11.882 [2024-12-09 05:49:05.826383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.882 [2024-12-09 05:49:05.826415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.882 qpair failed and we were unable to recover it. 00:54:11.882 [2024-12-09 05:49:05.826574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.882 [2024-12-09 05:49:05.826627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.882 qpair failed and we were unable to recover it. 00:54:11.882 [2024-12-09 05:49:05.826789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.882 [2024-12-09 05:49:05.826833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.882 qpair failed and we were unable to recover it. 00:54:11.882 [2024-12-09 05:49:05.826948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.882 [2024-12-09 05:49:05.826975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.882 qpair failed and we were unable to recover it. 00:54:11.882 [2024-12-09 05:49:05.827115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.882 [2024-12-09 05:49:05.827145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.882 qpair failed and we were unable to recover it. 00:54:11.882 [2024-12-09 05:49:05.827278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.882 [2024-12-09 05:49:05.827310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.882 qpair failed and we were unable to recover it. 00:54:11.882 [2024-12-09 05:49:05.827468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.882 [2024-12-09 05:49:05.827498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.882 qpair failed and we were unable to recover it. 00:54:11.882 [2024-12-09 05:49:05.827628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.882 [2024-12-09 05:49:05.827658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.882 qpair failed and we were unable to recover it. 
00:54:11.882 [2024-12-09 05:49:05.827756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.882 [2024-12-09 05:49:05.827788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.882 qpair failed and we were unable to recover it. 00:54:11.882 [2024-12-09 05:49:05.827934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.882 [2024-12-09 05:49:05.827996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.882 qpair failed and we were unable to recover it. 00:54:11.882 [2024-12-09 05:49:05.828114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.882 [2024-12-09 05:49:05.828146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.882 qpair failed and we were unable to recover it. 00:54:11.882 [2024-12-09 05:49:05.828258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.882 [2024-12-09 05:49:05.828325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.882 qpair failed and we were unable to recover it. 00:54:11.882 [2024-12-09 05:49:05.828469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.882 [2024-12-09 05:49:05.828511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.882 qpair failed and we were unable to recover it. 00:54:11.882 [2024-12-09 05:49:05.828651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.882 [2024-12-09 05:49:05.828678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.882 qpair failed and we were unable to recover it. 00:54:11.882 [2024-12-09 05:49:05.828856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.882 [2024-12-09 05:49:05.828910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.882 qpair failed and we were unable to recover it. 00:54:11.882 [2024-12-09 05:49:05.829026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.882 [2024-12-09 05:49:05.829054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.882 qpair failed and we were unable to recover it. 00:54:11.882 [2024-12-09 05:49:05.829173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.882 [2024-12-09 05:49:05.829200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.882 qpair failed and we were unable to recover it. 00:54:11.882 [2024-12-09 05:49:05.829373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.882 [2024-12-09 05:49:05.829405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.882 qpair failed and we were unable to recover it. 
00:54:11.882 [2024-12-09 05:49:05.829504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.882 [2024-12-09 05:49:05.829539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.882 qpair failed and we were unable to recover it. 00:54:11.882 [2024-12-09 05:49:05.829668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.882 [2024-12-09 05:49:05.829698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.882 qpair failed and we were unable to recover it. 00:54:11.882 [2024-12-09 05:49:05.829838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.882 [2024-12-09 05:49:05.829883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.882 qpair failed and we were unable to recover it. 00:54:11.882 [2024-12-09 05:49:05.830010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.882 [2024-12-09 05:49:05.830039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.882 qpair failed and we were unable to recover it. 00:54:11.882 [2024-12-09 05:49:05.830156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.882 [2024-12-09 05:49:05.830184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.882 qpair failed and we were unable to recover it. 00:54:11.882 [2024-12-09 05:49:05.830332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.882 [2024-12-09 05:49:05.830379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.882 qpair failed and we were unable to recover it. 00:54:11.882 [2024-12-09 05:49:05.830522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.882 [2024-12-09 05:49:05.830568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.882 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.830696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.830724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.830867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.830910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.831000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.831029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 
00:54:11.883 [2024-12-09 05:49:05.831169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.831195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.831331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.831365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.831518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.831572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.831778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.831804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.831947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.831991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.832149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.832185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.832349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.832381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.832483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.832513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.832618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.832654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.832779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.832810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 
00:54:11.883 [2024-12-09 05:49:05.832904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.832934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.833033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.833065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.833222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.833252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.833413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.833440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.833552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.833588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.833671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.833699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.833842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.833869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.833991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.834035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.834144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.834171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.834256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.834295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 
00:54:11.883 [2024-12-09 05:49:05.834387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.834431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.834635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.834704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.834897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.834946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.835105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.835136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.835269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.835307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.835412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.835442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.835599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.835658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.835750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.835784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.835967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.883 [2024-12-09 05:49:05.836016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.883 qpair failed and we were unable to recover it. 00:54:11.883 [2024-12-09 05:49:05.836103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.836137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 
00:54:11.884 [2024-12-09 05:49:05.836294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.836345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.836522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.836568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.836701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.836749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.836908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.836939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.837067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.837097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.837244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.837285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.837394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.837426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.837553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.837583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.837740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.837792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.837894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.837926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 
00:54:11.884 [2024-12-09 05:49:05.838054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.838083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.838176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.838205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.838304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.838335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.838438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.838469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.838643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.838675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.838774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.838806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.838976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.839009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.839145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.839178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.839308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.839343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.839492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.839536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 
00:54:11.884 [2024-12-09 05:49:05.839677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.839725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.839845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.839885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.840043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.840074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.840227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.840256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.840460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.840509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.840644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.840675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.840871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.840924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.841018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.841048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.841204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.841234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.841357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.841391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 
00:54:11.884 [2024-12-09 05:49:05.841544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.841577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.884 [2024-12-09 05:49:05.841740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.884 [2024-12-09 05:49:05.841775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.884 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.841932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.841962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.842093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.842123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.842246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.842281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.842435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.842468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.842637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.842668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.842811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.842845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.842967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.843000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.843151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.843193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 
00:54:11.885 [2024-12-09 05:49:05.843301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.843332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.843479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.843527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.843668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.843713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.843892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.843949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.844098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.844138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.844250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.844307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.844439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.844471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.844625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.844658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.844814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.844848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.844952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.844994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 
00:54:11.885 [2024-12-09 05:49:05.845160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.845189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.845337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.845388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.845481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.845513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.845574] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:54:11.885 [2024-12-09 05:49:05.845619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.845656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.885 [2024-12-09 05:49:05.845665] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.845800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.845851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.845950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.845979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.846080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.846110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.846251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.846287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.846420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.846449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 
00:54:11.885 [2024-12-09 05:49:05.846550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.846587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.846731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.846784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.846933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.846985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.847173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.885 [2024-12-09 05:49:05.847206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.885 qpair failed and we were unable to recover it. 00:54:11.885 [2024-12-09 05:49:05.847311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.847353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.847529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.847566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.847689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.847726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.847861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.847896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.848048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.848078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.848220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.848251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 
00:54:11.886 [2024-12-09 05:49:05.848360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.848393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.848506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.848544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.848739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.848820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.849023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.849088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.849282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.849313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.849480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.849523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.849660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.849702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.849926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.849993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.850174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.850205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.850330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.850360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 
00:54:11.886 [2024-12-09 05:49:05.850493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.850527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.850693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.850743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.850841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.850871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.851014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.851059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.851169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.851201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.851316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.851354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.851473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.851506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.851620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.851665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.851804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.851836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.851946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.851976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 
00:54:11.886 [2024-12-09 05:49:05.852078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.852107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.852201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.852230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.852374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.852409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.852520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.852552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.852710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.852743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.852855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.852888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.852993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.853025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.853184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.853219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.886 qpair failed and we were unable to recover it. 00:54:11.886 [2024-12-09 05:49:05.853322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.886 [2024-12-09 05:49:05.853357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.853515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.853564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 
00:54:11.887 [2024-12-09 05:49:05.853694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.853727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.853877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.853907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.854003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.854035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.854141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.854185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.854351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.854386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.854498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.854546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.854661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.854691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.854847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.854896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.855043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.855074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.855229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.855258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 
00:54:11.887 [2024-12-09 05:49:05.855397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.855434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.855550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.855588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.855745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.855778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.855939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.855972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.856104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.856132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.856281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.856328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.856470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.856498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.856657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.856685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.856808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.856836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.856979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.857007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 
00:54:11.887 [2024-12-09 05:49:05.857126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.857155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.857285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.857315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.857434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.857462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.857580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.857607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.857714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.857744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.857862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.857907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.858051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.858078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.858229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.858257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.858357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.858384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.858500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.858528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 
00:54:11.887 [2024-12-09 05:49:05.858649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.858676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.858764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.858791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.887 [2024-12-09 05:49:05.858883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.887 [2024-12-09 05:49:05.858912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.887 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.859015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.859049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.859146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.859176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.859360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.859401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.859528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.859556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.859678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.859706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.859824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.859851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.859955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.859982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 
00:54:11.888 [2024-12-09 05:49:05.860076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.860104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.860232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.860259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.860385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.860413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.860506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.860534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.860645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.860672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.860778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.860806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.860891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.860919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.861016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.861043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.861137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.861165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.861256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.861296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 
00:54:11.888 [2024-12-09 05:49:05.861417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.861445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.861539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.861567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.861689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.861722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.861810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.861838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.861967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.861995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.862140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.862168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.862269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.862306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.862401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.862429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.862547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.862575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.862697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.862725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 
00:54:11.888 [2024-12-09 05:49:05.862860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.862888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.862976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.863005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.863117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.863144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.888 [2024-12-09 05:49:05.863243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.888 [2024-12-09 05:49:05.863280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.888 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.863387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.863417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.863511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.863539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.863682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.863710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.863827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.863871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.863986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.864012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.864133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.864160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 
00:54:11.889 [2024-12-09 05:49:05.864244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.864269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.864388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.864414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.864508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.864535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.864624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.864650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.864793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.864819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.864909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.864936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.865055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.865084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.865173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.865199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.865329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.865357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.865499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.865531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 
00:54:11.889 [2024-12-09 05:49:05.865643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.865670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.865789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.865815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.865938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.865966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.866046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.866073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.866217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.866244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.866391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.866418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.866541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.866567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.866664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.866690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.866820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.866846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.866952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.866978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 
00:54:11.889 [2024-12-09 05:49:05.867080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.867106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.867191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.867217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.867336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.867363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.867496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.867524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.867673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.867699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.867785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.867811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.867904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.867932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.868061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.868086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.889 qpair failed and we were unable to recover it. 00:54:11.889 [2024-12-09 05:49:05.868217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.889 [2024-12-09 05:49:05.868243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.868408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.868440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 
00:54:11.890 [2024-12-09 05:49:05.868578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.868610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.868708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.868740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.868920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.868956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.869076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.869105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.869238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.869282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.869413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.869461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.869545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.869583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.869678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.869707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.869794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.869821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.869936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.869962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 
00:54:11.890 [2024-12-09 05:49:05.870075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.870101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.870180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.870207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.870385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.870434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.870600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.870651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.870835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.870890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.871060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.871134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.871336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.871387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.871562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.871612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.871716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.871775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.871925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.871952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 
00:54:11.890 [2024-12-09 05:49:05.872076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.872103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.872206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.872247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.872422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.872457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.872593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.872625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.872743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.872776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.872970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.872997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.873101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.873132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.873266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.873315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.873440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.873469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.873632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.873682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 
00:54:11.890 [2024-12-09 05:49:05.873834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.873884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.874024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.874052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.874154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.874182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.874298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.890 [2024-12-09 05:49:05.874347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.890 qpair failed and we were unable to recover it. 00:54:11.890 [2024-12-09 05:49:05.874487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.874520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.874713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.874747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.874890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.874925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.875041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.875068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.875209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.875250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.875415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.875452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 
00:54:11.891 [2024-12-09 05:49:05.875565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.875621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.875812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.875854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.876099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.876128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.876224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.876251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.876392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.876421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.876530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.876563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.876649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.876681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.876799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.876825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.876916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.876943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.877055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.877081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 
00:54:11.891 [2024-12-09 05:49:05.877164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.877189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.877283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.877310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.877446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.877477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.877648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.877680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.877809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.877858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.878000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.878026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.878099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.878126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.878205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.878231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.878332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.878362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.878450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.878479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 
00:54:11.891 [2024-12-09 05:49:05.878601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.878657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.878858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.878914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.879072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.879121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.879300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.879334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.879498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.879534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.879679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.879713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.879836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.879866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.879989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.880024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.880144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.880171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.880282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.880314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 
00:54:11.891 [2024-12-09 05:49:05.880408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.891 [2024-12-09 05:49:05.880435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.891 qpair failed and we were unable to recover it. 00:54:11.891 [2024-12-09 05:49:05.880528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.880555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.880685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.880712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.880823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.880860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.880978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.881004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.881122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.881148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.881316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.881352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.881490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.881522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.881713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.881767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.881885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.881913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 
00:54:11.892 [2024-12-09 05:49:05.882062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.882090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.882205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.882238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.882361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.882388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.882532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.882558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.882645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.882670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.882811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.882837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.882961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.882987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.883104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.883137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.883225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.883253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.883424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.883455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 
00:54:11.892 [2024-12-09 05:49:05.883536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.883594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.883763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.883799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.883980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.884032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.884161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.884188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.884326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.884366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.884471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.884499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.884597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.884631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.884806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.884837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.884971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.885021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.885117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.885142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 
00:54:11.892 [2024-12-09 05:49:05.885262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.885306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.885424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.885450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.885556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.885587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.885733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.885768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.885919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.885953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.892 qpair failed and we were unable to recover it. 00:54:11.892 [2024-12-09 05:49:05.886105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.892 [2024-12-09 05:49:05.886138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.886233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.886285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.886433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.886483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.886617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.886668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.886881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.886918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 
00:54:11.893 [2024-12-09 05:49:05.887087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.887124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.887280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.887318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.887465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.887492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.887610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.887657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.887751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.887779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.887889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.887915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.888034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.888059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.888173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.888198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.888315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.888342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.888425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.888460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 
00:54:11.893 [2024-12-09 05:49:05.888546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.888571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.888651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.888676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.888758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.888783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.888855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.888880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.888986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.889012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.889108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.889147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.889244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.889282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.889389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.889416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.889535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.889561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.889642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.889668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 
00:54:11.893 [2024-12-09 05:49:05.889790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.889815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.889940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.889966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.890076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.890102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.890205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.890232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.890329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.890356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.890441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.890468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.890582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.890607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.890716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.890744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.890858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.890885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.891026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.891052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 
00:54:11.893 [2024-12-09 05:49:05.891149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.891176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.891296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.891325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.891438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.891465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.891586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.893 [2024-12-09 05:49:05.891612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.893 qpair failed and we were unable to recover it. 00:54:11.893 [2024-12-09 05:49:05.891692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.894 [2024-12-09 05:49:05.891719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.894 qpair failed and we were unable to recover it. 00:54:11.894 [2024-12-09 05:49:05.891834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.894 [2024-12-09 05:49:05.891860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.894 qpair failed and we were unable to recover it. 00:54:11.894 [2024-12-09 05:49:05.891960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.894 [2024-12-09 05:49:05.891986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.894 qpair failed and we were unable to recover it. 00:54:11.894 [2024-12-09 05:49:05.892103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.894 [2024-12-09 05:49:05.892131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.894 qpair failed and we were unable to recover it. 00:54:11.894 [2024-12-09 05:49:05.892217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.894 [2024-12-09 05:49:05.892242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.894 qpair failed and we were unable to recover it. 00:54:11.894 [2024-12-09 05:49:05.892365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.894 [2024-12-09 05:49:05.892393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.894 qpair failed and we were unable to recover it. 
00:54:11.894 [2024-12-09 05:49:05.892482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:11.894 [2024-12-09 05:49:05.892508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420
00:54:11.894 qpair failed and we were unable to recover it.
[... the same three-line failure record repeats back-to-back from 05:49:05.892482 through 05:49:05.920359, cycling over tqpair handles 0x7faa14000b90, 0x7faa0c000b90 and 0xe16fa0, always against addr=10.0.0.2, port=4420 with errno = 111 ...]
00:54:11.899 [2024-12-09 05:49:05.920332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:11.899 [2024-12-09 05:49:05.920359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420
00:54:11.899 qpair failed and we were unable to recover it.
00:54:11.899 [2024-12-09 05:49:05.920444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.899 [2024-12-09 05:49:05.920469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.899 qpair failed and we were unable to recover it. 00:54:11.899 [2024-12-09 05:49:05.920551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.899 [2024-12-09 05:49:05.920577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.899 qpair failed and we were unable to recover it. 00:54:11.899 [2024-12-09 05:49:05.920691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.899 [2024-12-09 05:49:05.920718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.899 qpair failed and we were unable to recover it. 00:54:11.899 [2024-12-09 05:49:05.920803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.899 [2024-12-09 05:49:05.920829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.899 qpair failed and we were unable to recover it. 00:54:11.899 [2024-12-09 05:49:05.920917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.899 [2024-12-09 05:49:05.920943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.899 qpair failed and we were unable to recover it. 00:54:11.899 [2024-12-09 05:49:05.921027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.899 [2024-12-09 05:49:05.921052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.899 qpair failed and we were unable to recover it. 00:54:11.899 [2024-12-09 05:49:05.921165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.899 [2024-12-09 05:49:05.921190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.899 qpair failed and we were unable to recover it. 00:54:11.899 [2024-12-09 05:49:05.921282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.899 [2024-12-09 05:49:05.921308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.899 qpair failed and we were unable to recover it. 00:54:11.899 [2024-12-09 05:49:05.921394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.899 [2024-12-09 05:49:05.921419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.899 qpair failed and we were unable to recover it. 00:54:11.899 [2024-12-09 05:49:05.921505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.899 [2024-12-09 05:49:05.921533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.899 qpair failed and we were unable to recover it. 
00:54:11.899 [2024-12-09 05:49:05.921628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.899 [2024-12-09 05:49:05.921655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.899 qpair failed and we were unable to recover it. 00:54:11.899 [2024-12-09 05:49:05.921794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.899 [2024-12-09 05:49:05.921820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.899 qpair failed and we were unable to recover it. 00:54:11.899 [2024-12-09 05:49:05.921908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.899 [2024-12-09 05:49:05.921934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.899 qpair failed and we were unable to recover it. 00:54:11.899 [2024-12-09 05:49:05.922073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.899 [2024-12-09 05:49:05.922099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.899 qpair failed and we were unable to recover it. 00:54:11.899 [2024-12-09 05:49:05.922214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.899 [2024-12-09 05:49:05.922242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.899 qpair failed and we were unable to recover it. 00:54:11.899 [2024-12-09 05:49:05.922334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.899 [2024-12-09 05:49:05.922360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.899 qpair failed and we were unable to recover it. 00:54:11.899 [2024-12-09 05:49:05.922454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.899 [2024-12-09 05:49:05.922481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.899 qpair failed and we were unable to recover it. 00:54:11.899 [2024-12-09 05:49:05.922554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.899 [2024-12-09 05:49:05.922579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.899 qpair failed and we were unable to recover it. 00:54:11.899 [2024-12-09 05:49:05.922688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.899 [2024-12-09 05:49:05.922713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.899 qpair failed and we were unable to recover it. 00:54:11.899 [2024-12-09 05:49:05.922813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.899 [2024-12-09 05:49:05.922839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.899 qpair failed and we were unable to recover it. 
00:54:11.899 [2024-12-09 05:49:05.922979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.899 [2024-12-09 05:49:05.923006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.899 qpair failed and we were unable to recover it. 00:54:11.899 [2024-12-09 05:49:05.923133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.899 [2024-12-09 05:49:05.923173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.899 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.923294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.923322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.923439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.923465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.923557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.923583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.923698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.923724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.923805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.923830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.923916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.923943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.924058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.924085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.924216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.924245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 
00:54:11.900 [2024-12-09 05:49:05.924349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.924376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.924493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.924519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.924657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.924683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.924772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.924799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.924880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.924905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.925009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.925035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.925107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.925132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.925244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.925270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.925382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.925407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.925492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.925517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 
00:54:11.900 [2024-12-09 05:49:05.925627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.925653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.925757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.925783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.925874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.925902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.926040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.926066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.926152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.926191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.926287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.926316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.926440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.926468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.926561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.926588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.926675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.926701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.926817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.926843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 
00:54:11.900 [2024-12-09 05:49:05.926930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.926958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.927038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.927064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.927170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.927196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.927307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.927333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.927418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.927444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.927534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.927559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.927643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.927669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.927738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:54:11.900 [2024-12-09 05:49:05.927819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.927846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.927940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.927966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 
00:54:11.900 [2024-12-09 05:49:05.928049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.928074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.928147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.928173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.928292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.928319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.928427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.928453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.928572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.900 [2024-12-09 05:49:05.928599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.900 qpair failed and we were unable to recover it. 00:54:11.900 [2024-12-09 05:49:05.928709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.928734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.928849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.928877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.928989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.929015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.929123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.929149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.929261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.929297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 
00:54:11.901 [2024-12-09 05:49:05.929387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.929413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.929499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.929525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.929664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.929689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.929771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.929797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.929882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.929908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.930001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.930028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.930106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.930134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.930248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.930288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.930428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.930454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.930567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.930594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 
00:54:11.901 [2024-12-09 05:49:05.930706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.930733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.930821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.930848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.930980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.931007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.931120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.931147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.931258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.931291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.931406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.931432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.931519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.931545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.931663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.931688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.931784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.931811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.931935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.931960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 
00:54:11.901 [2024-12-09 05:49:05.932051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.932079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.932169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.932196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.932315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.932342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.932426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.932451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.932589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.932614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.932697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.932722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.932807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.932833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.932944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.932970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.933112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.933138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.933249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.933285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 
00:54:11.901 [2024-12-09 05:49:05.933369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.933397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.933512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.933538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.933610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.933635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.901 [2024-12-09 05:49:05.933745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.901 [2024-12-09 05:49:05.933771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.901 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.933880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.933911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.934011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.934050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.934173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.934200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.934321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.934347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.934436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.934463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.934605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.934631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 
00:54:11.902 [2024-12-09 05:49:05.934753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.934779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.934861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.934890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.935001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.935029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.935147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.935173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.935262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.935295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.935383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.935410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.935505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.935531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.935621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.935647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.935764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.935791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.935880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.935906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 
00:54:11.902 [2024-12-09 05:49:05.935998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.936023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.936160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.936186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.936306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.936333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.936448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.936473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.936616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.936643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.936769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.936794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.936890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.936929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.937050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.937076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.937189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.937216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.937300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.937326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 
00:54:11.902 [2024-12-09 05:49:05.937445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.937470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.937588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.937619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.937731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.937757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.937862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.937887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.938000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.938025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.938101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.938127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.938246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.938278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.938373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.938399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.938527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.938565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.938714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.938742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 
00:54:11.902 [2024-12-09 05:49:05.938837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.938863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.938982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.939008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.939154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.939179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.939299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.939326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.939415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.939442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.939546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.939585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.939670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.939698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.939839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.939865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.939983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.940008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.940096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.940123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 
00:54:11.902 [2024-12-09 05:49:05.940214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.940240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.940344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.940371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.940460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.940485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.940577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.940603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.940746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.940772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.940847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.940873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.940987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.941014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.941102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.941129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.941221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.941248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.941334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.941363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 
00:54:11.902 [2024-12-09 05:49:05.941439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.941465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.941552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.941579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.941669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.941695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.941775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.941800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.941919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.941946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.942024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.942050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.942161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.902 [2024-12-09 05:49:05.942189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.902 qpair failed and we were unable to recover it. 00:54:11.902 [2024-12-09 05:49:05.942313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.942341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.942416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.942442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.942556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.942581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 
00:54:11.903 [2024-12-09 05:49:05.942669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.942695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.942807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.942839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.942929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.942957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.943075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.943101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.943182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.943208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.943287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.943314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.943410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.943438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.943549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.943576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.943720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.943746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.943868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.943894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 
00:54:11.903 [2024-12-09 05:49:05.944006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.944032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.944128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.944155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.944240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.944266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.944368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.944393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.944504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.944530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.944658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.944684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.944769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.944796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.944883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.944909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.945030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.945056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.945171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.945197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 
00:54:11.903 [2024-12-09 05:49:05.945284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.945310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.945394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.945420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.945535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.945560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.945648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.945675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.945788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.945814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.945952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.945978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.946102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.946142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.946265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.946300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.946418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.946450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.946558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.946584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 
00:54:11.903 [2024-12-09 05:49:05.946694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.946720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.946805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.946832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.946918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.946945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.947066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.947092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.947235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.947261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.947405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.947430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.947524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.947549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.947633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.947659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.947738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.947765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.947885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.947911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 
00:54:11.903 [2024-12-09 05:49:05.948017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.948043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.948188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.948213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.948341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.948366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.948504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.948529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.948614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.948639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.948730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.948756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.948845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.948870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.948953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.948981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.949102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.949128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.949264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.949297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 
00:54:11.903 [2024-12-09 05:49:05.949378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.949405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.949490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.949515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.949626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.949651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.949783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.949808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.949900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.949925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.950045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.950071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.950200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.950240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.950398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.950427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.950541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.950567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.950683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.950709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 
00:54:11.903 [2024-12-09 05:49:05.950824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.950851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.950943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.950981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.951080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.903 [2024-12-09 05:49:05.951107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.903 qpair failed and we were unable to recover it. 00:54:11.903 [2024-12-09 05:49:05.951190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.951215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.951304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.951331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.951473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.951498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.951615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.951640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.951752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.951780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.951895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.951927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.952044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.952070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 
00:54:11.904 [2024-12-09 05:49:05.952161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.952186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.952338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.952365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.952458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.952483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.952561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.952587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.952703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.952728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.952845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.952870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.952983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.953010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.953118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.953157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.953285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.953314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.953452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.953478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 
00:54:11.904 [2024-12-09 05:49:05.953559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.953584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.953662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.953689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.953803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.953830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.953913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.953939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.954052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.954078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.954162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.954187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.954262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.954295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.954389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.954415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.954498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.954524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.954631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.954657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 
00:54:11.904 [2024-12-09 05:49:05.954773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.954798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.954944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.954970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.955054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.955079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.955199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.955224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.955335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.955363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.955454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.955493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.955589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.955617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.955748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.955775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.955916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.955942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.956070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.956097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 
00:54:11.904 [2024-12-09 05:49:05.956188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.956214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.956336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.956364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.956481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.956507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.956623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.956648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.956719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.956744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.956860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.956886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.956979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.957019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.957111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.957138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.957253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.957286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.957382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.957408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 
00:54:11.904 [2024-12-09 05:49:05.957483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.957508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.957595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.957622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.957710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.957736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.957819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.957844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.957957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.957983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.958071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.958097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.958187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.958213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.958328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.958354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.958469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.958494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.958608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.958636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 
00:54:11.904 [2024-12-09 05:49:05.958725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.958751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.958830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.958857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.958968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.958994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.959109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.959135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.959246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.959278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.904 qpair failed and we were unable to recover it. 00:54:11.904 [2024-12-09 05:49:05.959423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.904 [2024-12-09 05:49:05.959450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.905 qpair failed and we were unable to recover it. 00:54:11.905 [2024-12-09 05:49:05.959566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.905 [2024-12-09 05:49:05.959592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.905 qpair failed and we were unable to recover it. 00:54:11.905 [2024-12-09 05:49:05.959696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.905 [2024-12-09 05:49:05.959722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.905 qpair failed and we were unable to recover it. 00:54:11.905 [2024-12-09 05:49:05.959835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.905 [2024-12-09 05:49:05.959861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.905 qpair failed and we were unable to recover it. 00:54:11.905 [2024-12-09 05:49:05.959942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.905 [2024-12-09 05:49:05.959967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.905 qpair failed and we were unable to recover it. 
00:54:11.905 [2024-12-09 05:49:05.960063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:11.905 [2024-12-09 05:49:05.960102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420
00:54:11.905 qpair failed and we were unable to recover it.
[... the same three-line failure pattern (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 05:49:05.960 through 05:49:05.988 (console timestamps 00:54:11.905 to 00:54:11.909), cycling over tqpair handles 0x7faa0c000b90, 0x7faa14000b90 and 0xe16fa0, with every attempt targeting addr=10.0.0.2, port=4420 ...]
00:54:11.909 [2024-12-09 05:49:05.988305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-12-09 05:49:05.988299] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:54:11.909 [2024-12-09 05:49:05.988331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it. 00:54:11.909 [2024-12-09 05:49:05.988333] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:54:11.909 [2024-12-09 05:49:05.988350] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:11.909 [2024-12-09 05:49:05.988364] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:54:11.909 [2024-12-09 05:49:05.988374] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:54:11.909 [2024-12-09 05:49:05.988451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.909 [2024-12-09 05:49:05.988478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it. 00:54:11.909 [2024-12-09 05:49:05.988587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.909 [2024-12-09 05:49:05.988611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it. 00:54:11.909 [2024-12-09 05:49:05.988704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.909 [2024-12-09 05:49:05.988730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it. 00:54:11.909 [2024-12-09 05:49:05.988845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.909 [2024-12-09 05:49:05.988872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it. 00:54:11.909 [2024-12-09 05:49:05.988975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.909 [2024-12-09 05:49:05.989004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it. 00:54:11.909 [2024-12-09 05:49:05.989102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.909 [2024-12-09 05:49:05.989129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it. 00:54:11.909 [2024-12-09 05:49:05.989269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.909 [2024-12-09 05:49:05.989304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it. 00:54:11.909 [2024-12-09 05:49:05.989387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.909 [2024-12-09 05:49:05.989413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it.
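The app_setup_trace notices above amount to a short how-to for grabbing the tracepoint data while the nvmf target is still running. A minimal sketch of what that could look like on the test node, using only the command and shared-memory file name quoted in the log (the /tmp destination is an arbitrary example, not something this job actually ran):

    # Sketch only, not part of the captured output.
    spdk_trace -s nvmf -i 0                      # attach to shm trace instance 0 of the 'nvmf' app, per the notice
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0   # or keep a copy of the shm file for offline analysis/debug
    # Context for the surrounding errors: errno = 111 is ECONNREFUSED on Linux
    # ("Connection refused"), i.e. nothing was yet accepting connections on 10.0.0.2:4420.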
00:54:11.909 [2024-12-09 05:49:05.989534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.909 [2024-12-09 05:49:05.989560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it. 00:54:11.909 [2024-12-09 05:49:05.989654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.909 [2024-12-09 05:49:05.989682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it. 00:54:11.909 [2024-12-09 05:49:05.989790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.909 [2024-12-09 05:49:05.989815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it. 00:54:11.909 [2024-12-09 05:49:05.989932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.909 [2024-12-09 05:49:05.989958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it. 00:54:11.909 [2024-12-09 05:49:05.989930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:54:11.909 [2024-12-09 05:49:05.989980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:54:11.909 [2024-12-09 05:49:05.990042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.909 [2024-12-09 05:49:05.990068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it. 00:54:11.909 [2024-12-09 05:49:05.990006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:54:11.909 [2024-12-09 05:49:05.990009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:54:11.909 [2024-12-09 05:49:05.990168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.909 [2024-12-09 05:49:05.990206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it. 00:54:11.909 [2024-12-09 05:49:05.990308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.909 [2024-12-09 05:49:05.990337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it. 00:54:11.909 [2024-12-09 05:49:05.990423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.909 [2024-12-09 05:49:05.990449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it. 00:54:11.909 [2024-12-09 05:49:05.990544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.909 [2024-12-09 05:49:05.990571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it. 
00:54:11.909 [2024-12-09 05:49:05.990670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.909 [2024-12-09 05:49:05.990697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it. 00:54:11.909 [2024-12-09 05:49:05.990805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.909 [2024-12-09 05:49:05.990831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it. 00:54:11.909 [2024-12-09 05:49:05.990912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.909 [2024-12-09 05:49:05.990938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it. 00:54:11.909 [2024-12-09 05:49:05.991031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.909 [2024-12-09 05:49:05.991057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it. 00:54:11.909 [2024-12-09 05:49:05.991147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.909 [2024-12-09 05:49:05.991174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it. 00:54:11.909 [2024-12-09 05:49:05.991260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.909 [2024-12-09 05:49:05.991293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it. 00:54:11.909 [2024-12-09 05:49:05.991380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.909 [2024-12-09 05:49:05.991407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it. 00:54:11.909 [2024-12-09 05:49:05.991486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.909 [2024-12-09 05:49:05.991512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it. 00:54:11.909 [2024-12-09 05:49:05.991622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.909 [2024-12-09 05:49:05.991649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it. 00:54:11.909 [2024-12-09 05:49:05.991743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.909 [2024-12-09 05:49:05.991769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.909 qpair failed and we were unable to recover it. 
00:54:11.910 [2024-12-09 05:49:05.991886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.991913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.992001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.992028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.992111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.992138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.992224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.992250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.992340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.992368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.992463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.992488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.992565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.992590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.992664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.992695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.992812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.992838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.992923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.992948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 
00:54:11.910 [2024-12-09 05:49:05.993028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.993055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.993150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.993176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.993266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.993308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.993400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.993426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.993519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.993546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.993631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.993657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.993767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.993793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.993878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.993905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.993997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.994023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.994108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.994135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 
00:54:11.910 [2024-12-09 05:49:05.994221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.994248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.994345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.994371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.994456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.994483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.994591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.994617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.994695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.994721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.994834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.994861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.994954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.994981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.995094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.995121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.995200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.995226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.995349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.995376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 
00:54:11.910 [2024-12-09 05:49:05.995460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.995485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.995564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.995589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.995680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.995706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.995790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.995817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.995895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.995923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.996023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.996049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.996140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.996166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.996280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.910 [2024-12-09 05:49:05.996306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.910 qpair failed and we were unable to recover it. 00:54:11.910 [2024-12-09 05:49:05.996392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.996418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:05.996505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.996531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 
00:54:11.911 [2024-12-09 05:49:05.996616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.996641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:05.996725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.996750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:05.996862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.996887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:05.996962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.996987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:05.997066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.997092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:05.997172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.997200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:05.997321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.997350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:05.997444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.997470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:05.997554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.997580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:05.997664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.997690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 
00:54:11.911 [2024-12-09 05:49:05.997777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.997803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:05.997892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.997917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:05.998001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.998026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:05.998109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.998134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:05.998250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.998282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:05.998359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.998383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:05.998464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.998490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:05.998569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.998595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:05.998671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.998696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:05.998782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.998808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 
00:54:11.911 [2024-12-09 05:49:05.998922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.998951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:05.999093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.999133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:05.999243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.999279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:05.999371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.999399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:05.999487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.999514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:05.999602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.999635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:05.999742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.999768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:05.999845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:05.999872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:05.999990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:06.000017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:06.000099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:06.000126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 
00:54:11.911 [2024-12-09 05:49:06.000207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.911 [2024-12-09 05:49:06.000238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.911 qpair failed and we were unable to recover it. 00:54:11.911 [2024-12-09 05:49:06.000335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.000361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.000437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.000462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.000542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.000568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.000647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.000676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.000785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.000810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.000890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.000919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.001012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.001038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.001118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.001144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.001227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.001253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 
00:54:11.912 [2024-12-09 05:49:06.001380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.001406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.001518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.001544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.001622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.001647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.001734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.001760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.001840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.001865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.001959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.001999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.002094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.002122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.002232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.002259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.002355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.002381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.002473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.002499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 
00:54:11.912 [2024-12-09 05:49:06.002590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.002616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.002723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.002749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.002846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.002874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.002980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.003020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.003141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.003168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.003265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.003306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.003394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.003420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.003531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.003557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.003645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.003671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.003753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.003778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 
00:54:11.912 [2024-12-09 05:49:06.003885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.003922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.004009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.004035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.004114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.004141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.004251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.004292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.004379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.004405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.004492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.004517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.004599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.004635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.004720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.004745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.004845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.912 [2024-12-09 05:49:06.004870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.912 qpair failed and we were unable to recover it. 00:54:11.912 [2024-12-09 05:49:06.004943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.004968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 
00:54:11.913 [2024-12-09 05:49:06.005074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.005101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.005175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.005201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.005299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.005326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.005469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.005496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.005608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.005647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.005728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.005753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.005876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.005901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.005992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.006019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.006125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.006164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.006252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.006292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 
00:54:11.913 [2024-12-09 05:49:06.006387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.006414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.006530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.006566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.006647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.006672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.006759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.006785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.006875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.006901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.006992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.007017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.007101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.007126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.007212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.007237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.007362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.007387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.007497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.007522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 
00:54:11.913 [2024-12-09 05:49:06.007616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.007641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.007736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.007762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.007867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.007892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.007969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.007994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.008077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.008103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.008189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.008215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.008313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.008338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.008427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.008454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.008538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.008564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.008683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.008713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 
00:54:11.913 [2024-12-09 05:49:06.008802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.008827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.008904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.008929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.009028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.009067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.009183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.009211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.009305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.009344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.009441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.009467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.009549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.913 [2024-12-09 05:49:06.009578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.913 qpair failed and we were unable to recover it. 00:54:11.913 [2024-12-09 05:49:06.009695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.009721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.009825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.009852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.009949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.009975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 
00:54:11.914 [2024-12-09 05:49:06.010051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.010077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.010174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.010199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.010303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.010329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.010413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.010439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.010522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.010548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.010663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.010688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.010774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.010799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.010913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.010939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.011024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.011055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.011138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.011164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 
00:54:11.914 [2024-12-09 05:49:06.011246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.011283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.011412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.011438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.011519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.011546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.011642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.011669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.011755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.011781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.011868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.011893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.011986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.012014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.012095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.012120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.012231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.012257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.012386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.012412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 
00:54:11.914 [2024-12-09 05:49:06.012492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.012518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.012601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.012627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.012740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.012767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.012856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.012882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.013003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.013029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.013102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.013127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.013238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.013279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.013356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.013381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.013458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.013484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.013569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.013595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 
00:54:11.914 [2024-12-09 05:49:06.013705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.013730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.013841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.013865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.013963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.013988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.014099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.014126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.014212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.014239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.914 [2024-12-09 05:49:06.014338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.914 [2024-12-09 05:49:06.014365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.914 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.014444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.014471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.014552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.014578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.014671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.014697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.014783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.014809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 
00:54:11.915 [2024-12-09 05:49:06.014892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.014917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.015036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.015063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.015188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.015229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.015323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.015351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.015436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.015461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.015544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.015579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.015660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.015685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.015771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.015797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.015884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.015911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.016022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.016049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 
00:54:11.915 [2024-12-09 05:49:06.016136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.016162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.016243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.016269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.016372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.016397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.016482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.016509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.016602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.016630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.016714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.016739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.016832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.016858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.016977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.017002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.017107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.017132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.017221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.017247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 
00:54:11.915 [2024-12-09 05:49:06.017331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.017357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.017432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.017457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.017545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.017575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.017659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.017685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.017799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.017824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.017908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.017933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.018032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.018061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.018152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.018192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.018339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.018368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.018457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.018484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 
00:54:11.915 [2024-12-09 05:49:06.018563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.018589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.018710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.018738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.018820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.018853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.018942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.915 [2024-12-09 05:49:06.018969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.915 qpair failed and we were unable to recover it. 00:54:11.915 [2024-12-09 05:49:06.019055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.019081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.019202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.019229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.019360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.019387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.019484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.019510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.019607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.019633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.019712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.019738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 
00:54:11.916 [2024-12-09 05:49:06.019828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.019855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.019964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.019991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.020083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.020111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.020194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.020221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.020354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.020383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.020471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.020496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.020577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.020603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.020684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.020710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.020801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.020828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.020921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.020949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 
00:54:11.916 [2024-12-09 05:49:06.021060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.021086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.021173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.021199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.021299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.021326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.021438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.021465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.021543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.021576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.021669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.021695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.021773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.021799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.021910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.021938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.022021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.022047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.022158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.022188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 
00:54:11.916 [2024-12-09 05:49:06.022288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.022314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.022405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.022430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.022517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.022543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.022639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.022664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.022742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.022768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.022883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.022909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.022983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.023008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.023087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.023113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.916 qpair failed and we were unable to recover it. 00:54:11.916 [2024-12-09 05:49:06.023200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.916 [2024-12-09 05:49:06.023226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.023331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.023358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 
00:54:11.917 [2024-12-09 05:49:06.023476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.023502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.023589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.023615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.023705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.023730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.023885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.023911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.023992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.024017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.024099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.024127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.024214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.024241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.024329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.024356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.024439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.024466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.024597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.024623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 
00:54:11.917 [2024-12-09 05:49:06.024712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.024739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.024852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.024879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.024966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.024992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.025073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.025099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.025187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.025213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.025301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.025328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.025406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.025436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.025514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.025539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.025634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.025661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.025794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.025834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 
00:54:11.917 [2024-12-09 05:49:06.025923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.025951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.026037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.026063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.026137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.026163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.026281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.026308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.026396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.026421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.026545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.026582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.026673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.026699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.026780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.026806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.026924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.026949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.027027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.027053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 
00:54:11.917 [2024-12-09 05:49:06.027134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.027159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.027299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.027327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.027410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.027435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.027527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.027554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.027670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.027696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.027812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.027839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.917 [2024-12-09 05:49:06.027919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.917 [2024-12-09 05:49:06.027945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.917 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.028028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.028055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.028145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.028184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.028290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.028319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 
00:54:11.918 [2024-12-09 05:49:06.028413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.028439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.028548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.028584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.028699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.028725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.028838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.028870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.028959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.028986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.029076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.029102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.029189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.029217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.029326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.029353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.029443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.029470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.029555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.029580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 
00:54:11.918 [2024-12-09 05:49:06.029662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.029688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.029775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.029802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.029892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.029918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.029993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.030018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.030127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.030153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.030281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.030306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.030387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.030412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.030503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.030528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.030612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.030648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.030764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.030793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 
00:54:11.918 [2024-12-09 05:49:06.030908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.030935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.031019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.031046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.031136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.031163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.031251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.031284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.031394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.031420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.031503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.031530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.031623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.031650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.031735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.031764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.031872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.031899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.032025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.032051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 
00:54:11.918 [2024-12-09 05:49:06.032132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.032158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.032250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.032299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.032388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.032414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.032488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.918 [2024-12-09 05:49:06.032514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.918 qpair failed and we were unable to recover it. 00:54:11.918 [2024-12-09 05:49:06.032633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.032659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 00:54:11.919 [2024-12-09 05:49:06.032751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.032778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 00:54:11.919 [2024-12-09 05:49:06.032895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.032921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 00:54:11.919 [2024-12-09 05:49:06.033002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.033029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 00:54:11.919 [2024-12-09 05:49:06.033146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.033173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 00:54:11.919 [2024-12-09 05:49:06.033278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.033308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 
00:54:11.919 [2024-12-09 05:49:06.033389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.033415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 00:54:11.919 [2024-12-09 05:49:06.033502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.033528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 00:54:11.919 [2024-12-09 05:49:06.033613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.033639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 00:54:11.919 [2024-12-09 05:49:06.033715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.033745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 00:54:11.919 [2024-12-09 05:49:06.033857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.033883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 00:54:11.919 [2024-12-09 05:49:06.033965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.033992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 00:54:11.919 [2024-12-09 05:49:06.034134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.034162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 00:54:11.919 [2024-12-09 05:49:06.034239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.034279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 00:54:11.919 [2024-12-09 05:49:06.034362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.034388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 00:54:11.919 [2024-12-09 05:49:06.034474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.034501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 
00:54:11.919 [2024-12-09 05:49:06.034621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.034648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 00:54:11.919 [2024-12-09 05:49:06.034722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.034748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 00:54:11.919 [2024-12-09 05:49:06.034883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.034909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 00:54:11.919 [2024-12-09 05:49:06.034994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.035020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 00:54:11.919 [2024-12-09 05:49:06.035144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.035171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 00:54:11.919 [2024-12-09 05:49:06.035288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.035315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 00:54:11.919 [2024-12-09 05:49:06.035432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.035458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 00:54:11.919 [2024-12-09 05:49:06.035542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.035567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 00:54:11.919 [2024-12-09 05:49:06.035693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.035718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 00:54:11.919 [2024-12-09 05:49:06.035804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.035830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 
00:54:11.919 [2024-12-09 05:49:06.035908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.035933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 00:54:11.919 [2024-12-09 05:49:06.036017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.036043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 00:54:11.919 [2024-12-09 05:49:06.036125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.036151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 00:54:11.919 [2024-12-09 05:49:06.036268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.036303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.919 qpair failed and we were unable to recover it. 00:54:11.919 [2024-12-09 05:49:06.036393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.919 [2024-12-09 05:49:06.036419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.036499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.036526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.036644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.036670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.036777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.036803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.036897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.036923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.037002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.037029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 
00:54:11.920 [2024-12-09 05:49:06.037155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.037203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.037336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.037366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.037485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.037512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.037612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.037638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.037734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.037759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.037837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.037862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.037939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.037965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.038078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.038103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.038182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.038208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.038308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.038336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 
00:54:11.920 [2024-12-09 05:49:06.038424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.038451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.038538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.038572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.038651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.038677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.038760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.038786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.038876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.038905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.039028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.039054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.039167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.039193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.039289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.039315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.039396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.039424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.039512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.039537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 
00:54:11.920 [2024-12-09 05:49:06.039615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.039640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.039730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.039755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.039865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.039890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.039977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.040002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.040094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.040121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.040228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.040254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.040361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.040389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.040476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.920 [2024-12-09 05:49:06.040506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.920 qpair failed and we were unable to recover it. 00:54:11.920 [2024-12-09 05:49:06.040641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.040666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.040776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.040803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 
00:54:11.921 [2024-12-09 05:49:06.040885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.040912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.041026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.041053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.041126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.041152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.041235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.041269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.041365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.041392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.041512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.041540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.041650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.041677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.041761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.041787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.041859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.041885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.042023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.042048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 
00:54:11.921 [2024-12-09 05:49:06.042136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.042161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.042255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.042301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.042388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.042414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.042500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.042527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.042639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.042665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.042747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.042774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.042916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.042943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.043022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.043049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.043129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.043155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.043246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.043277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 
00:54:11.921 [2024-12-09 05:49:06.043365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.043391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.043480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.043505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.043597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.043623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.043698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.043723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.043828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.043858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.043981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.044006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.044089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.044116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.044193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.044219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.044308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.044334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.044446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.044471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 
00:54:11.921 [2024-12-09 05:49:06.044548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.044578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.044661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.044686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.044781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.044807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.044895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.044920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.044998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.045025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.045108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.921 [2024-12-09 05:49:06.045134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.921 qpair failed and we were unable to recover it. 00:54:11.921 [2024-12-09 05:49:06.045239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.045265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.045360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.045386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.045476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.045501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.045585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.045610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 
00:54:11.922 [2024-12-09 05:49:06.045693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.045719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.045792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.045817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.045898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.045924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.046022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.046049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.046123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.046149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.046263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.046295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.046388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.046413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.046496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.046524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.046623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.046649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.046742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.046769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 
00:54:11.922 [2024-12-09 05:49:06.046849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.046875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.046953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.046985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.047088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.047115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.047201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.047227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.047328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.047354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.047440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.047465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.047543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.047568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.047678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.047703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.047781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.047807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.047884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.047909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 
00:54:11.922 [2024-12-09 05:49:06.048013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.048039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.048122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.048150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.048235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.048261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.048382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.048410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.048522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.048548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.048634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.048660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.048748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.048775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.048897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.048937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.049069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.049098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.049181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.049208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 
00:54:11.922 [2024-12-09 05:49:06.049299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.049325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.049457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.049483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.049568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.049593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.049709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.922 [2024-12-09 05:49:06.049734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.922 qpair failed and we were unable to recover it. 00:54:11.922 [2024-12-09 05:49:06.049813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.049838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.049918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.049943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.050031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.050059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.050136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.050162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.050246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.050288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.050374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.050401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 
00:54:11.923 [2024-12-09 05:49:06.050538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.050566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.050651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.050677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.050819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.050845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.050936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.050979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.051119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.051146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.051257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.051293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.051381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.051407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.051527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.051553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.051633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.051658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.051738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.051764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 
00:54:11.923 [2024-12-09 05:49:06.051892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.051919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.051994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.052021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.052122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.052149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.052281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.052308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.052382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.052408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.052500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.052529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.052611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.052635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.052728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.052752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.052854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.052878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.052955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.052981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 
00:54:11.923 [2024-12-09 05:49:06.053075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.053101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.053194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.053221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.053303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.053329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.053412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.053438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.053535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.053572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.053665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.053696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.053826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.053864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.053950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.053976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.054083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.054110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 00:54:11.923 [2024-12-09 05:49:06.054194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:11.923 [2024-12-09 05:49:06.054220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:11.923 qpair failed and we were unable to recover it. 
00:54:11.923 [2024-12-09 05:49:06.054330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.054361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.054439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.054465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.054553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.054579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.054672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.054697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.054791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.054822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.054908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.054936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.055022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.055048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.055122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.055148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.055230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.055255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.055369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.055398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 
00:54:12.207 [2024-12-09 05:49:06.055498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.055527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.055624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.055650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.055749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.055777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.055868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.055894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.055981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.056007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.056088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.056113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.056202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.056227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.056344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.056376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.056479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.056506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.056603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.056633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 
00:54:12.207 [2024-12-09 05:49:06.056717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.056744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.056834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.056863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.056955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.056985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.057078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.057103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.057184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.057210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.057305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.057332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.057417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.057441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.057543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.057568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.057658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.057684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.057769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.057798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 
00:54:12.207 [2024-12-09 05:49:06.057885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.057912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.058001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.058030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.058115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.058143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.058234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.058260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.058351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.207 [2024-12-09 05:49:06.058377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.207 qpair failed and we were unable to recover it. 00:54:12.207 [2024-12-09 05:49:06.058461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.058488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.058582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.058610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.058707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.058734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.058832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.058858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.058941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.058968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 
00:54:12.208 [2024-12-09 05:49:06.059059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.059088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.059179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.059206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.059298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.059325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.059410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.059436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.059514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.059540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.059620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.059645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.059733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.059759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.059850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.059876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.059973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.060000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.060123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.060153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 
00:54:12.208 [2024-12-09 05:49:06.060238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.060265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.060349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.060376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.060458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.060485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.060580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.060607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.060692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.060718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.060807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.060832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.060910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.060936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.061022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.061047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.061135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.061163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.061255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.061288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 
00:54:12.208 [2024-12-09 05:49:06.061373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.061399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.061489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.061515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.061615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.061646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.061738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.061765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.061864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.061891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.061988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.062017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.062116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.062143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.062235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.062260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.062390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.062418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.062508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.062535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 
00:54:12.208 [2024-12-09 05:49:06.062613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.062642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.062724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.062752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.062829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.208 [2024-12-09 05:49:06.062855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.208 qpair failed and we were unable to recover it. 00:54:12.208 [2024-12-09 05:49:06.062942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.209 [2024-12-09 05:49:06.062967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.209 qpair failed and we were unable to recover it. 00:54:12.209 [2024-12-09 05:49:06.063049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.209 [2024-12-09 05:49:06.063074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.209 qpair failed and we were unable to recover it. 00:54:12.209 [2024-12-09 05:49:06.063161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.209 [2024-12-09 05:49:06.063187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.209 qpair failed and we were unable to recover it. 00:54:12.209 [2024-12-09 05:49:06.063300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.209 [2024-12-09 05:49:06.063328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.209 qpair failed and we were unable to recover it. 00:54:12.209 [2024-12-09 05:49:06.063421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.209 [2024-12-09 05:49:06.063447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.209 qpair failed and we were unable to recover it. 00:54:12.209 [2024-12-09 05:49:06.063526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.209 [2024-12-09 05:49:06.063552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.209 qpair failed and we were unable to recover it. 00:54:12.209 [2024-12-09 05:49:06.063632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.209 [2024-12-09 05:49:06.063657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.209 qpair failed and we were unable to recover it. 
00:54:12.209 [2024-12-09 05:49:06.063746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.209 [2024-12-09 05:49:06.063775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.209 qpair failed and we were unable to recover it. 00:54:12.209 [2024-12-09 05:49:06.063871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.209 [2024-12-09 05:49:06.063899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.209 qpair failed and we were unable to recover it. 00:54:12.209 [2024-12-09 05:49:06.063994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.209 [2024-12-09 05:49:06.064021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.209 qpair failed and we were unable to recover it. 00:54:12.209 [2024-12-09 05:49:06.064108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.209 [2024-12-09 05:49:06.064133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.209 qpair failed and we were unable to recover it. 00:54:12.209 [2024-12-09 05:49:06.064211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.209 [2024-12-09 05:49:06.064236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.209 qpair failed and we were unable to recover it. 00:54:12.209 [2024-12-09 05:49:06.064345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.209 [2024-12-09 05:49:06.064371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.209 qpair failed and we were unable to recover it. 00:54:12.209 [2024-12-09 05:49:06.064467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.209 [2024-12-09 05:49:06.064492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.209 qpair failed and we were unable to recover it. 00:54:12.209 [2024-12-09 05:49:06.064599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.209 [2024-12-09 05:49:06.064625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.209 qpair failed and we were unable to recover it. 00:54:12.209 [2024-12-09 05:49:06.064704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.209 [2024-12-09 05:49:06.064729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.209 qpair failed and we were unable to recover it. 00:54:12.209 [2024-12-09 05:49:06.064810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.209 [2024-12-09 05:49:06.064842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.209 qpair failed and we were unable to recover it. 
00:54:12.209 [2024-12-09 05:49:06.064927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.209 [2024-12-09 05:49:06.064954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.209 qpair failed and we were unable to recover it. 00:54:12.209 [2024-12-09 05:49:06.065039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.209 [2024-12-09 05:49:06.065065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.209 qpair failed and we were unable to recover it. 00:54:12.209 [2024-12-09 05:49:06.065145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.209 [2024-12-09 05:49:06.065170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.209 qpair failed and we were unable to recover it. 00:54:12.209 [2024-12-09 05:49:06.065255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.209 [2024-12-09 05:49:06.065288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.209 qpair failed and we were unable to recover it. 00:54:12.209 [2024-12-09 05:49:06.065383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.209 [2024-12-09 05:49:06.065408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.209 qpair failed and we were unable to recover it. 00:54:12.209 [2024-12-09 05:49:06.065497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.209 [2024-12-09 05:49:06.065523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.209 qpair failed and we were unable to recover it. 00:54:12.209 [2024-12-09 05:49:06.065608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.209 [2024-12-09 05:49:06.065634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.209 qpair failed and we were unable to recover it. 00:54:12.209 [2024-12-09 05:49:06.065717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.209 [2024-12-09 05:49:06.065742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.209 qpair failed and we were unable to recover it. 00:54:12.209 [2024-12-09 05:49:06.065817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.209 [2024-12-09 05:49:06.065842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.209 qpair failed and we were unable to recover it. 00:54:12.209 [2024-12-09 05:49:06.065930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.209 [2024-12-09 05:49:06.065959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.209 qpair failed and we were unable to recover it. 
00:54:12.209 [2024-12-09 05:49:06.066038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.209 [2024-12-09 05:49:06.066064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420
00:54:12.209 qpair failed and we were unable to recover it.
00:54:12.209 [... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error -> "qpair failed and we were unable to recover it.") repeats continuously from 05:49:06.066 through 05:49:06.091 (console timestamps 00:54:12.209-00:54:12.215), cycling over tqpair values 0x7faa08000b90, 0x7faa0c000b90, 0x7faa14000b90, and 0xe16fa0, always against addr=10.0.0.2, port=4420 ...]
00:54:12.215 [2024-12-09 05:49:06.091212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.215 [2024-12-09 05:49:06.091237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.215 qpair failed and we were unable to recover it. 00:54:12.215 [2024-12-09 05:49:06.091328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.215 [2024-12-09 05:49:06.091355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.215 qpair failed and we were unable to recover it. 00:54:12.215 [2024-12-09 05:49:06.091439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.215 [2024-12-09 05:49:06.091465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.215 qpair failed and we were unable to recover it. 00:54:12.215 [2024-12-09 05:49:06.091541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.215 [2024-12-09 05:49:06.091567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.215 qpair failed and we were unable to recover it. 00:54:12.215 [2024-12-09 05:49:06.091640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.215 [2024-12-09 05:49:06.091667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.215 qpair failed and we were unable to recover it. 00:54:12.215 [2024-12-09 05:49:06.091756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.215 [2024-12-09 05:49:06.091784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.215 qpair failed and we were unable to recover it. 00:54:12.215 [2024-12-09 05:49:06.091870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.215 [2024-12-09 05:49:06.091898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.215 qpair failed and we were unable to recover it. 00:54:12.215 [2024-12-09 05:49:06.092008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.215 [2024-12-09 05:49:06.092034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.215 qpair failed and we were unable to recover it. 00:54:12.215 [2024-12-09 05:49:06.092128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.215 [2024-12-09 05:49:06.092155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.215 qpair failed and we were unable to recover it. 00:54:12.215 [2024-12-09 05:49:06.092233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.215 [2024-12-09 05:49:06.092280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.215 qpair failed and we were unable to recover it. 
00:54:12.215 [2024-12-09 05:49:06.092384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.215 [2024-12-09 05:49:06.092412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.215 qpair failed and we were unable to recover it. 00:54:12.215 [2024-12-09 05:49:06.092495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.215 [2024-12-09 05:49:06.092521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.215 qpair failed and we were unable to recover it. 00:54:12.215 [2024-12-09 05:49:06.092599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.215 [2024-12-09 05:49:06.092625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.215 qpair failed and we were unable to recover it. 00:54:12.215 [2024-12-09 05:49:06.092729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.215 [2024-12-09 05:49:06.092754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.215 qpair failed and we were unable to recover it. 00:54:12.215 [2024-12-09 05:49:06.092834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.215 [2024-12-09 05:49:06.092863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.215 qpair failed and we were unable to recover it. 00:54:12.215 [2024-12-09 05:49:06.092983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.215 [2024-12-09 05:49:06.093011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.215 qpair failed and we were unable to recover it. 00:54:12.215 [2024-12-09 05:49:06.093098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.215 [2024-12-09 05:49:06.093124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.215 qpair failed and we were unable to recover it. 00:54:12.215 [2024-12-09 05:49:06.093207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.215 [2024-12-09 05:49:06.093232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.215 qpair failed and we were unable to recover it. 00:54:12.215 [2024-12-09 05:49:06.093324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.215 [2024-12-09 05:49:06.093355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.215 qpair failed and we were unable to recover it. 00:54:12.215 [2024-12-09 05:49:06.093443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.215 [2024-12-09 05:49:06.093468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.215 qpair failed and we were unable to recover it. 
00:54:12.215 [2024-12-09 05:49:06.093549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.215 [2024-12-09 05:49:06.093574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.215 qpair failed and we were unable to recover it. 00:54:12.215 [2024-12-09 05:49:06.093661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.215 [2024-12-09 05:49:06.093688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.215 qpair failed and we were unable to recover it. 00:54:12.215 [2024-12-09 05:49:06.093799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.215 [2024-12-09 05:49:06.093827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.215 qpair failed and we were unable to recover it. 00:54:12.215 [2024-12-09 05:49:06.093910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.215 [2024-12-09 05:49:06.093936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.215 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.094045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.094071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.094151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.094177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.094277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.094306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.094391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.094418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.094500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.094525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.094633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.094659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 
00:54:12.216 [2024-12-09 05:49:06.094744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.094769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.094855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.094883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.094990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.095017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.095107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.095134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.095216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.095249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.095344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.095382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.095467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.095493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.095608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.095634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.095709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.095735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.095832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.095871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 
00:54:12.216 [2024-12-09 05:49:06.095996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.096023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.096141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.096168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.096252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.096284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.096367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.096399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.096477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.096503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.096584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.096610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.096684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.096710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.096819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.096845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.096961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.096989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.097102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.097128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 
00:54:12.216 [2024-12-09 05:49:06.097205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.097232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.097322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.097349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.097428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.097453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.097527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.097552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.097642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.097666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.097776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.097801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.097887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.216 [2024-12-09 05:49:06.097912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.216 qpair failed and we were unable to recover it. 00:54:12.216 [2024-12-09 05:49:06.097988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.098012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.098124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.098149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.098230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.098256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 
00:54:12.217 [2024-12-09 05:49:06.098350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.098378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.098491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.098530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.098623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.098652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.098733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.098759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.098849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.098875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.098973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.098999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.099076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.099102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.099217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.099244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.099331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.099357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.099435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.099460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 
00:54:12.217 [2024-12-09 05:49:06.099560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.099586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.099675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.099702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.099783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.099809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.099934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.099962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.100048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.100083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.100169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.100197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.100286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.100312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.100404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.100430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.100511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.100536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.100618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.100644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 
00:54:12.217 [2024-12-09 05:49:06.100723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.100749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.100830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.100855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.100930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.100955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.101060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.101105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.101200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.101228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.101323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.101352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.101443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.101469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.101547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.101576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.101671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.101699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.101779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.101805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 
00:54:12.217 [2024-12-09 05:49:06.101886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.101912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.102005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.102035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.102125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.102150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.102231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.102256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.102350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.217 [2024-12-09 05:49:06.102380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.217 qpair failed and we were unable to recover it. 00:54:12.217 [2024-12-09 05:49:06.102457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.102483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.102604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.102634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.102717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.102743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.102846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.102871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.102958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.102985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 
00:54:12.218 [2024-12-09 05:49:06.103077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.103103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.103189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.103219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.103298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.103324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.103402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.103429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.103512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.103541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.103628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.103655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.103738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.103763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.103840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.103865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.103949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.103974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.104059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.104084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 
00:54:12.218 [2024-12-09 05:49:06.104165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.104190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.104283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.104311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.104400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.104433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.104522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.104550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.104635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.104661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.104785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.104811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.104896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.104922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.105010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.105036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.105118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.105144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.105230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.105256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 
00:54:12.218 [2024-12-09 05:49:06.105344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.105369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.105447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.105472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.105558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.105583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.105688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.105715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.105795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.105821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.105909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.105938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.106027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.106053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.106132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.106157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.106266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.106315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.106405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.106431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 
00:54:12.218 [2024-12-09 05:49:06.106545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.106581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.106696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.106722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.106837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.218 [2024-12-09 05:49:06.106864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.218 qpair failed and we were unable to recover it. 00:54:12.218 [2024-12-09 05:49:06.106947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.106973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.107065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.107091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.107178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.107204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.107293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.107320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.107439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.107464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.107545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.107570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.107649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.107674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 
00:54:12.219 [2024-12-09 05:49:06.107764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.107792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.107886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.107914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.108044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.108070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.108187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.108214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.108301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.108329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.108416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.108442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.108522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.108548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.108632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.108657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.108745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.108770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.108853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.108878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 
00:54:12.219 [2024-12-09 05:49:06.108985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.109026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.109129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.109168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.109256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.109295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.109385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.109412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.109498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.109523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.109602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.109629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.109719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.109746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.109839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.109864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.109946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.109971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.110083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.110108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 
00:54:12.219 [2024-12-09 05:49:06.110200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.110230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.110328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.110357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.110439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.110465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.110548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.110575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.110663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.110689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.110777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.110805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.110891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.110918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.110997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.111023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.111114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.111144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.111233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.111258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 
00:54:12.219 [2024-12-09 05:49:06.111342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.219 [2024-12-09 05:49:06.111371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.219 qpair failed and we were unable to recover it. 00:54:12.219 [2024-12-09 05:49:06.111453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.111480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.111604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.111631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.111711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.111736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.111819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.111847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.111930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.111958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.112050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.112077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.112190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.112216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.112297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.112323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.112400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.112427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 
00:54:12.220 [2024-12-09 05:49:06.112507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.112533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.112624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.112649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.112742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.112769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.112883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.112909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.112994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.113021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.113108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.113134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.113223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.113252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.113362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.113400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.113485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.113511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.113596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.113621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 
00:54:12.220 [2024-12-09 05:49:06.113701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.113728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.113811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.113836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.113914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.113940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.114020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.114046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.114164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.114193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.114279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.114313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.114402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.114430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.114518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.114546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.114647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.114673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.114773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.114799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 
00:54:12.220 [2024-12-09 05:49:06.114916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.114944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.115032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.115058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.115131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.115156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.115231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.115257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.115346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.115374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.220 [2024-12-09 05:49:06.115460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.220 [2024-12-09 05:49:06.115486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.220 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.115571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.115599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.115683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.115709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.115791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.115819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.115900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.115927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 
00:54:12.221 [2024-12-09 05:49:06.116012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.116044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.116167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.116193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.116279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.116306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.116386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.116412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.116502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.116528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.116610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.116636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.116764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.116793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.116874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:12.221 [2024-12-09 05:49:06.116899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.116988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.117013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 
00:54:12.221 [2024-12-09 05:49:06.117088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.117114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.117194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:54:12.221 [2024-12-09 05:49:06.117220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.117307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.117341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.117428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.117456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.117543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.117570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:54:12.221 [2024-12-09 05:49:06.117656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.117683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.117793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.117822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:54:12.221 [2024-12-09 05:49:06.117906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.117934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.118042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.118069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 
00:54:12.221 [2024-12-09 05:49:06.118159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:54:12.221 [2024-12-09 05:49:06.118186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.118259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.118293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.118377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.118402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.118477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.118502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.118584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.118612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.118710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.118737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.118833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.118859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.118948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.118974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.119052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.119078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 
00:54:12.221 [2024-12-09 05:49:06.119154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.119181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.119276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.119319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.119399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.119425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.119513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.119540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.221 [2024-12-09 05:49:06.119666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.221 [2024-12-09 05:49:06.119692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.221 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.119781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.119807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.119919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.119962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.120041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.120066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.120164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.120191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.120293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.120333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 
00:54:12.222 [2024-12-09 05:49:06.120436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.120463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.120584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.120624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.120758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.120787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.120881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.120907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.120994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.121021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.121104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.121142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.121252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.121286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.121378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.121406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.121496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.121523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.121614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.121642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 
00:54:12.222 [2024-12-09 05:49:06.121721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.121746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.121822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.121848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.121925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.121955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.122054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.122099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.122187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.122215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.122307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.122334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.122423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.122449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.122538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.122564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.122656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.122681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.122787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.122813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 
00:54:12.222 [2024-12-09 05:49:06.122896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.122923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.123009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.123052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.123145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.123174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.123322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.123353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.123431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.123459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.123548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.123573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.123679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.123713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.123817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.123843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.123916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.123942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.124023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.124048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 
00:54:12.222 [2024-12-09 05:49:06.124160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.124188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.124280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.222 [2024-12-09 05:49:06.124307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.222 qpair failed and we were unable to recover it. 00:54:12.222 [2024-12-09 05:49:06.124394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.223 [2024-12-09 05:49:06.124419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.223 qpair failed and we were unable to recover it. 00:54:12.223 [2024-12-09 05:49:06.124498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.223 [2024-12-09 05:49:06.124524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.223 qpair failed and we were unable to recover it. 00:54:12.223 [2024-12-09 05:49:06.124613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.223 [2024-12-09 05:49:06.124640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.223 qpair failed and we were unable to recover it. 00:54:12.223 [2024-12-09 05:49:06.124726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.223 [2024-12-09 05:49:06.124752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.223 qpair failed and we were unable to recover it. 00:54:12.223 [2024-12-09 05:49:06.124862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.223 [2024-12-09 05:49:06.124888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.223 qpair failed and we were unable to recover it. 00:54:12.223 [2024-12-09 05:49:06.124999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.223 [2024-12-09 05:49:06.125029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.223 qpair failed and we were unable to recover it. 00:54:12.223 [2024-12-09 05:49:06.125112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.223 [2024-12-09 05:49:06.125154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.223 qpair failed and we were unable to recover it. 00:54:12.223 [2024-12-09 05:49:06.125241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.223 [2024-12-09 05:49:06.125268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.223 qpair failed and we were unable to recover it. 
00:54:12.223 [2024-12-09 05:49:06.125368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.223 [2024-12-09 05:49:06.125401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.223 qpair failed and we were unable to recover it. 00:54:12.223 [2024-12-09 05:49:06.125486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.223 [2024-12-09 05:49:06.125516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.223 qpair failed and we were unable to recover it. 00:54:12.223 [2024-12-09 05:49:06.125599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.223 [2024-12-09 05:49:06.125625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.223 qpair failed and we were unable to recover it. 00:54:12.223 [2024-12-09 05:49:06.125705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.223 [2024-12-09 05:49:06.125730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.223 qpair failed and we were unable to recover it. 00:54:12.223 [2024-12-09 05:49:06.125840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.223 [2024-12-09 05:49:06.125866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.223 qpair failed and we were unable to recover it. 00:54:12.223 [2024-12-09 05:49:06.125948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.223 [2024-12-09 05:49:06.125972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.223 qpair failed and we were unable to recover it. 00:54:12.223 [2024-12-09 05:49:06.126052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.223 [2024-12-09 05:49:06.126078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.223 qpair failed and we were unable to recover it. 00:54:12.223 [2024-12-09 05:49:06.126194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.223 [2024-12-09 05:49:06.126222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.223 qpair failed and we were unable to recover it. 00:54:12.223 [2024-12-09 05:49:06.126315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.223 [2024-12-09 05:49:06.126344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.223 qpair failed and we were unable to recover it. 00:54:12.223 [2024-12-09 05:49:06.126426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.223 [2024-12-09 05:49:06.126452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.223 qpair failed and we were unable to recover it. 
00:54:12.223 [2024-12-09 05:49:06.126528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.223 [2024-12-09 05:49:06.126568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420
00:54:12.223 qpair failed and we were unable to recover it.
00:54:12.223 [2024-12-09 05:49:06.127149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.223 [2024-12-09 05:49:06.127188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420
00:54:12.223 qpair failed and we were unable to recover it.
00:54:12.223 [2024-12-09 05:49:06.127283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.223 [2024-12-09 05:49:06.127312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420
00:54:12.223 qpair failed and we were unable to recover it.
00:54:12.223 [2024-12-09 05:49:06.127395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.223 [2024-12-09 05:49:06.127423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420
00:54:12.223 qpair failed and we were unable to recover it.
00:54:12.226 [2024-12-09 05:49:06.139662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.226 [2024-12-09 05:49:06.139688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420
00:54:12.226 qpair failed and we were unable to recover it.
00:54:12.226 [2024-12-09 05:49:06.139778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.226 [2024-12-09 05:49:06.139810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420
00:54:12.226 qpair failed and we were unable to recover it.
00:54:12.226 [2024-12-09 05:49:06.139907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.226 [2024-12-09 05:49:06.139933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420
00:54:12.226 qpair failed and we were unable to recover it.
00:54:12.226 [2024-12-09 05:49:06.140018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.226 [2024-12-09 05:49:06.140044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420
00:54:12.226 qpair failed and we were unable to recover it.
00:54:12.226 [2024-12-09 05:49:06.140129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.226 [2024-12-09 05:49:06.140154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420
00:54:12.226 qpair failed and we were unable to recover it.
00:54:12.226 [2024-12-09 05:49:06.140234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.226 [2024-12-09 05:49:06.140259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420
00:54:12.226 qpair failed and we were unable to recover it.
00:54:12.226 [2024-12-09 05:49:06.140356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.226 [2024-12-09 05:49:06.140384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420
00:54:12.226 qpair failed and we were unable to recover it.
00:54:12.226 [2024-12-09 05:49:06.140463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.226 [2024-12-09 05:49:06.140491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420
00:54:12.226 qpair failed and we were unable to recover it.
00:54:12.226 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:54:12.226 [2024-12-09 05:49:06.140573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.226 [2024-12-09 05:49:06.140612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420
00:54:12.226 qpair failed and we were unable to recover it.
00:54:12.226 [2024-12-09 05:49:06.140696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.226 [2024-12-09 05:49:06.140729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420
00:54:12.226 qpair failed and we were unable to recover it.
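The repeated errno = 111 (ECONNREFUSED) records above are the host side retrying connections while this target_disconnect test case runs; the nvmf/common.sh@512 trace line shows the harness registering its cleanup handler so that process_shm and nvmftestfini run however the test case exits. Below is a minimal sketch of that trap pattern; the stub bodies for process_shm and nvmftestfini are placeholders, not the real helpers from the SPDK test scripts.

#!/usr/bin/env bash
# Sketch of the cleanup-trap pattern seen in the nvmf/common.sh@512 trace line.
# process_shm and nvmftestfini are stand-ins; the real helpers live in SPDK's
# test scripts (nvmf/common.sh and autotest_common.sh).
set -e

NVMF_APP_SHM_ID=0

process_shm()  { echo "would dump shared-memory state: $*"; }
nvmftestfini() { echo "would stop the nvmf target and clean up"; }

# Register cleanup for Ctrl-C, kill, and normal exit. The '|| :' keeps a
# failing process_shm from aborting the remaining cleanup under 'set -e'.
trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT

echo "test body would run here"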
00:54:12.226 [2024-12-09 05:49:06.140813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.226 [2024-12-09 05:49:06.140841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420
00:54:12.226 qpair failed and we were unable to recover it.
00:54:12.226 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:54:12.226 [2024-12-09 05:49:06.140927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.226 [2024-12-09 05:49:06.140955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420
00:54:12.226 qpair failed and we were unable to recover it.
00:54:12.226 [2024-12-09 05:49:06.141047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.226 [2024-12-09 05:49:06.141075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420
00:54:12.226 qpair failed and we were unable to recover it.
00:54:12.226 [2024-12-09 05:49:06.141163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.226 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:54:12.226 [2024-12-09 05:49:06.141189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420
00:54:12.226 qpair failed and we were unable to recover it.
00:54:12.226 [2024-12-09 05:49:06.141322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.226 [2024-12-09 05:49:06.141350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420
00:54:12.226 qpair failed and we were unable to recover it.
00:54:12.226 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:54:12.226 [2024-12-09 05:49:06.141429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.226 [2024-12-09 05:49:06.141456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420
00:54:12.226 qpair failed and we were unable to recover it.
00:54:12.226 [2024-12-09 05:49:06.141536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.226 [2024-12-09 05:49:06.141563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420
00:54:12.226 qpair failed and we were unable to recover it.
00:54:12.226 [2024-12-09 05:49:06.141676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.226 [2024-12-09 05:49:06.141701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420
00:54:12.226 qpair failed and we were unable to recover it.
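The host/target_disconnect.sh@19 trace line above creates the backing device for the test: bdev_malloc_create 64 512 -b Malloc0 asks the target for a 64 MB RAM-backed bdev with a 512-byte block size, named Malloc0, and rpc_cmd is the test helper that forwards the call to SPDK's rpc.py. A roughly equivalent direct invocation is sketched below; the ./scripts/rpc.py path and the default RPC socket are assumptions, not taken from this log.

# Hedged sketch: issuing the same RPCs directly with SPDK's rpc.py
# (script path and default /var/tmp/spdk.sock socket are assumptions).
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MB bdev, 512 B blocks
./scripts/rpc.py bdev_get_bdevs -b Malloc0              # confirm Malloc0 was created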
00:54:12.226 [2024-12-09 05:49:06.141786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.227 [2024-12-09 05:49:06.141811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420
00:54:12.227 qpair failed and we were unable to recover it.
00:54:12.227 [2024-12-09 05:49:06.141903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.227 [2024-12-09 05:49:06.141931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420
00:54:12.227 qpair failed and we were unable to recover it.
00:54:12.227 [2024-12-09 05:49:06.142119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.227 [2024-12-09 05:49:06.142148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420
00:54:12.227 qpair failed and we were unable to recover it.
00:54:12.227 [2024-12-09 05:49:06.145229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.227 [2024-12-09 05:49:06.145268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420
00:54:12.227 qpair failed and we were unable to recover it.
00:54:12.229 [2024-12-09 05:49:06.151257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:54:12.229 [2024-12-09 05:49:06.151289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420
00:54:12.229 qpair failed and we were unable to recover it.
00:54:12.229 [2024-12-09 05:49:06.151390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.151418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.229 [2024-12-09 05:49:06.151516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.151543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.229 [2024-12-09 05:49:06.151628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.151656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.229 [2024-12-09 05:49:06.151743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.151768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.229 [2024-12-09 05:49:06.151881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.151907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.229 [2024-12-09 05:49:06.151998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.152023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.229 [2024-12-09 05:49:06.152113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.152139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.229 [2024-12-09 05:49:06.152222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.152251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.229 [2024-12-09 05:49:06.152353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.152379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.229 [2024-12-09 05:49:06.152503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.152529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 
00:54:12.229 [2024-12-09 05:49:06.152615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.152641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.229 [2024-12-09 05:49:06.152749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.152775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.229 [2024-12-09 05:49:06.152869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.152896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.229 [2024-12-09 05:49:06.152983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.153010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.229 [2024-12-09 05:49:06.153093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.153122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.229 [2024-12-09 05:49:06.153208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.153241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.229 [2024-12-09 05:49:06.153349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.153376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.229 [2024-12-09 05:49:06.153491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.153518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.229 [2024-12-09 05:49:06.153638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.153665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.229 [2024-12-09 05:49:06.153782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.153808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 
00:54:12.229 [2024-12-09 05:49:06.153889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.153915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.229 [2024-12-09 05:49:06.154006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.154032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.229 [2024-12-09 05:49:06.154120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.154145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.229 [2024-12-09 05:49:06.154240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.154265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.229 [2024-12-09 05:49:06.154366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.154392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.229 [2024-12-09 05:49:06.154475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.154502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.229 [2024-12-09 05:49:06.154587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.154614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.229 [2024-12-09 05:49:06.154706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.154732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.229 [2024-12-09 05:49:06.154822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.154848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.229 [2024-12-09 05:49:06.154931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.154957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 
00:54:12.229 [2024-12-09 05:49:06.155050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.229 [2024-12-09 05:49:06.155084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.229 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.155167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.155196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.155287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.155315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.155405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.155431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.155520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.155545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.155637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.155668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.155752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.155779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.155864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.155891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.155983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.156008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.156105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.156134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 
00:54:12.230 [2024-12-09 05:49:06.156214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.156241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.156345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.156372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.156457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.156483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.156571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.156599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.156690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.156716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.156805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.156831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.156944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.156969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.157061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.157090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.157170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.157195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.157287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.157314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 
00:54:12.230 [2024-12-09 05:49:06.157403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.157428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.157513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.157539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.157628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.157656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.157736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.157762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.157850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.157875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.157972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.157998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.158073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.158100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.158179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.158204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.158292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.158319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.158407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.158435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 
00:54:12.230 [2024-12-09 05:49:06.158519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.158545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.158633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.158659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.158743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.158774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.158857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.158883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.158965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.159001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.159083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.159125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.159208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.159234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.159337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.159364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.230 [2024-12-09 05:49:06.159452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.230 [2024-12-09 05:49:06.159478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.230 qpair failed and we were unable to recover it. 00:54:12.231 [2024-12-09 05:49:06.159570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.159600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 00:54:12.231 A controller has encountered a failure and is being reset. 
00:54:12.231 [2024-12-09 05:49:06.159697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.159724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 00:54:12.231 [2024-12-09 05:49:06.159815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.159842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 00:54:12.231 [2024-12-09 05:49:06.159926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.159956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 00:54:12.231 [2024-12-09 05:49:06.160043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.160070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 00:54:12.231 [2024-12-09 05:49:06.160161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.160190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 00:54:12.231 [2024-12-09 05:49:06.160312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.160356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa08000b90 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 00:54:12.231 [2024-12-09 05:49:06.160440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.160466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 00:54:12.231 [2024-12-09 05:49:06.160548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.160573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 00:54:12.231 [2024-12-09 05:49:06.160683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.160708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 00:54:12.231 [2024-12-09 05:49:06.160796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.160823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 
00:54:12.231 [2024-12-09 05:49:06.160904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.160929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 00:54:12.231 [2024-12-09 05:49:06.161007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.161033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 00:54:12.231 [2024-12-09 05:49:06.161111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.161135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 00:54:12.231 [2024-12-09 05:49:06.161251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.161282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 00:54:12.231 [2024-12-09 05:49:06.161379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.161404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 00:54:12.231 [2024-12-09 05:49:06.161480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.161505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 00:54:12.231 [2024-12-09 05:49:06.161595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.161620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 00:54:12.231 [2024-12-09 05:49:06.161697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.161723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 00:54:12.231 [2024-12-09 05:49:06.161808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.161833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 00:54:12.231 [2024-12-09 05:49:06.161932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.161958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 
00:54:12.231 [2024-12-09 05:49:06.162036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.162061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa14000b90 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 00:54:12.231 [2024-12-09 05:49:06.162170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.162196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 00:54:12.231 [2024-12-09 05:49:06.162282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.162308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe16fa0 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 00:54:12.231 [2024-12-09 05:49:06.162404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.162432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 00:54:12.231 [2024-12-09 05:49:06.162516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.162544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 00:54:12.231 [2024-12-09 05:49:06.162627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.162654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 00:54:12.231 [2024-12-09 05:49:06.162749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.162775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 00:54:12.231 [2024-12-09 05:49:06.162865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.162891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa0c000b90 with addr=10.0.0.2, port=4420 00:54:12.231 qpair failed and we were unable to recover it. 
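For context, errno 111 on Linux is ECONNREFUSED: each of the repeated posix_sock_create failures above means nothing was listening on 10.0.0.2:4420 at that moment, so the host side keeps retrying the qpair connect until the target listener is created by the RPCs further down. A minimal sketch to confirm the errno mapping on the test node (assuming python3 is installed there; this is not part of the test scripts):

    # hedged illustration only: translate errno 111 to its symbolic name and message
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # expected on Linux: ECONNREFUSED - Connection refused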
00:54:12.231 [2024-12-09 05:49:06.162993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:54:12.231 [2024-12-09 05:49:06.163030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe24f30 with addr=10.0.0.2, port=4420 00:54:12.231 [2024-12-09 05:49:06.163048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe24f30 is same with the state(6) to be set 00:54:12.231 [2024-12-09 05:49:06.163073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe24f30 (9): Bad file descriptor 00:54:12.231 [2024-12-09 05:49:06.163091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:54:12.231 [2024-12-09 05:49:06.163105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:54:12.231 [2024-12-09 05:49:06.163119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:54:12.231 Unable to reset the controller. 00:54:12.231 Malloc0 00:54:12.231 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:12.231 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:54:12.231 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:12.231 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:54:12.231 [2024-12-09 05:49:06.192227] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:12.231 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:12.231 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:54:12.231 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:12.231 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:54:12.231 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:12.231 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:54:12.232 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:12.232 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:54:12.232 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:12.232 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:54:12.232 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:54:12.232 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:54:12.232 [2024-12-09 05:49:06.220530] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:54:12.232 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:12.232 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:54:12.232 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:12.232 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:54:12.232 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:12.232 05:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 763413 00:54:13.163 Controller properly reset. 00:54:18.418 Initializing NVMe Controllers 00:54:18.418 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:54:18.418 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:54:18.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:54:18.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:54:18.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:54:18.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:54:18.418 Initialization complete. Launching workers. 
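The rpc_cmd calls traced above are what bring the TCP target back so the host can reconnect. As a hedged sketch only (assuming the standard scripts/rpc.py client that rpc_cmd wraps, and that the Malloc0 bdev was created earlier in the run), the same configuration issued by hand would look like:

    # recreate the target side shown in the trace (arguments copied from the rpc_cmd lines above)
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # the listener RPCs are what end the errno 111 retry loop seen earlier
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

This matches the log flipping from the connect-failure loop to "Controller properly reset." once the listener is in place.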
00:54:18.418 Starting thread on core 1 00:54:18.418 Starting thread on core 2 00:54:18.418 Starting thread on core 3 00:54:18.418 Starting thread on core 0 00:54:18.418 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:54:18.418 00:54:18.418 real 0m10.703s 00:54:18.418 user 0m34.024s 00:54:18.418 sys 0m7.341s 00:54:18.418 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:54:18.418 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:54:18.418 ************************************ 00:54:18.418 END TEST nvmf_target_disconnect_tc2 00:54:18.418 ************************************ 00:54:18.418 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:54:18.418 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:54:18.418 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:54:18.418 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:54:18.418 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:54:18.418 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:54:18.418 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:54:18.418 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:54:18.418 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:54:18.418 rmmod nvme_tcp 00:54:18.418 rmmod nvme_fabrics 00:54:18.418 rmmod nvme_keyring 00:54:18.418 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:54:18.418 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:54:18.418 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:54:18.418 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 763940 ']' 00:54:18.418 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 763940 00:54:18.418 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 763940 ']' 00:54:18.418 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 763940 00:54:18.418 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:54:18.418 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:18.418 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 763940 00:54:18.418 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:54:18.418 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:54:18.419 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 763940' 00:54:18.419 killing process with pid 763940 00:54:18.419 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 763940 00:54:18.419 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 763940 00:54:18.419 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:54:18.419 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:54:18.419 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:54:18.419 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:54:18.419 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:54:18.419 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:54:18.419 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:54:18.419 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:54:18.419 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:54:18.419 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:18.419 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:54:18.419 05:49:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:20.377 05:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:54:20.377 00:54:20.377 real 0m15.958s 00:54:20.377 user 0m59.830s 00:54:20.377 sys 0m9.981s 00:54:20.377 05:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:54:20.377 05:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:54:20.377 ************************************ 00:54:20.377 END TEST nvmf_target_disconnect 00:54:20.377 ************************************ 00:54:20.377 05:49:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:54:20.377 00:54:20.377 real 5m5.001s 00:54:20.377 user 11m4.497s 00:54:20.377 sys 1m15.957s 00:54:20.377 05:49:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:54:20.377 05:49:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:54:20.377 ************************************ 00:54:20.377 END TEST nvmf_host 00:54:20.377 ************************************ 00:54:20.377 05:49:14 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:54:20.377 05:49:14 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:54:20.377 05:49:14 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:54:20.377 05:49:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:54:20.377 05:49:14 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:54:20.377 05:49:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:54:20.377 ************************************ 00:54:20.377 START TEST nvmf_target_core_interrupt_mode 00:54:20.377 ************************************ 00:54:20.377 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:54:20.635 * Looking for test storage... 00:54:20.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:54:20.635 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:54:20.635 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:54:20.635 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:54:20.635 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:54:20.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:20.636 --rc genhtml_branch_coverage=1 00:54:20.636 --rc genhtml_function_coverage=1 00:54:20.636 --rc genhtml_legend=1 00:54:20.636 --rc geninfo_all_blocks=1 00:54:20.636 --rc geninfo_unexecuted_blocks=1 00:54:20.636 00:54:20.636 ' 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:54:20.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:20.636 --rc genhtml_branch_coverage=1 00:54:20.636 --rc genhtml_function_coverage=1 00:54:20.636 --rc genhtml_legend=1 00:54:20.636 --rc geninfo_all_blocks=1 00:54:20.636 --rc geninfo_unexecuted_blocks=1 00:54:20.636 00:54:20.636 ' 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:54:20.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:20.636 --rc genhtml_branch_coverage=1 00:54:20.636 --rc genhtml_function_coverage=1 00:54:20.636 --rc genhtml_legend=1 00:54:20.636 --rc geninfo_all_blocks=1 00:54:20.636 --rc geninfo_unexecuted_blocks=1 00:54:20.636 00:54:20.636 ' 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:54:20.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:20.636 --rc genhtml_branch_coverage=1 00:54:20.636 --rc genhtml_function_coverage=1 00:54:20.636 --rc genhtml_legend=1 00:54:20.636 --rc geninfo_all_blocks=1 00:54:20.636 --rc geninfo_unexecuted_blocks=1 00:54:20.636 00:54:20.636 ' 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:54:20.636 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:54:20.637 ************************************ 00:54:20.637 START TEST nvmf_abort 00:54:20.637 ************************************ 00:54:20.637 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:54:20.637 * Looking for test storage... 00:54:20.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:54:20.637 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:54:20.637 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:54:20.637 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:54:20.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:20.896 --rc genhtml_branch_coverage=1 00:54:20.896 --rc genhtml_function_coverage=1 00:54:20.896 --rc genhtml_legend=1 00:54:20.896 --rc geninfo_all_blocks=1 00:54:20.896 --rc geninfo_unexecuted_blocks=1 00:54:20.896 00:54:20.896 ' 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:54:20.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:20.896 --rc genhtml_branch_coverage=1 00:54:20.896 --rc genhtml_function_coverage=1 00:54:20.896 --rc genhtml_legend=1 00:54:20.896 --rc geninfo_all_blocks=1 00:54:20.896 --rc geninfo_unexecuted_blocks=1 00:54:20.896 00:54:20.896 ' 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:54:20.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:20.896 --rc genhtml_branch_coverage=1 00:54:20.896 --rc genhtml_function_coverage=1 00:54:20.896 --rc genhtml_legend=1 00:54:20.896 --rc geninfo_all_blocks=1 00:54:20.896 --rc geninfo_unexecuted_blocks=1 00:54:20.896 00:54:20.896 ' 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:54:20.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:20.896 --rc genhtml_branch_coverage=1 00:54:20.896 --rc genhtml_function_coverage=1 00:54:20.896 --rc genhtml_legend=1 00:54:20.896 --rc geninfo_all_blocks=1 00:54:20.896 --rc geninfo_unexecuted_blocks=1 00:54:20.896 00:54:20.896 ' 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:54:20.896 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:54:20.897 05:49:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:54:20.897 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:54:20.897 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:54:20.897 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:54:20.897 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:54:20.897 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:54:20.897 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:54:20.897 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:54:20.897 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:54:20.897 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:54:20.897 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:54:20.897 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:54:20.897 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:54:20.897 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:20.897 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:54:20.897 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:20.897 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:54:20.897 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:54:20.897 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:54:20.897 05:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:54:23.427 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:54:23.427 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:54:23.427 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:54:23.427 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:54:23.427 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:54:23.427 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:54:23.427 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:54:23.427 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:54:23.427 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:54:23.427 05:49:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:54:23.427 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:54:23.427 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:54:23.427 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:54:23.427 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:54:23.427 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:54:23.427 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:54:23.427 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:54:23.427 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:54:23.427 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:54:23.427 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:54:23.427 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:54:23.427 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:54:23.427 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:54:23.427 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:54:23.427 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:54:23.427 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:54:23.428 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:54:23.428 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:54:23.428 Found net devices under 0000:0a:00.0: cvl_0_0 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:54:23.428 Found net devices under 0000:0a:00.1: cvl_0_1 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:54:23.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:54:23.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:54:23.428 00:54:23.428 --- 10.0.0.2 ping statistics --- 00:54:23.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:23.428 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:54:23.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:54:23.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:54:23.428 00:54:23.428 --- 10.0.0.1 ping statistics --- 00:54:23.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:23.428 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=766760 
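The nvmf_tcp_init steps traced above amount to a small two-port topology: the target-side port cvl_0_0 is moved into the namespace cvl_0_0_ns_spdk and given 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace with 10.0.0.1/24, an iptables rule admits TCP port 4420 on the initiator interface, and both directions are verified with a single ping. A minimal standalone sketch of that setup, using the cvl_0_* interface names and 10.0.0.x addresses from this particular run (both are per-run values, not fixed SPDK defaults), would be:

# sketch of the traced nvmf_tcp_init topology; interface names and addresses
# are the ones this run happened to use
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                                  # root namespace -> target port
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> initiator port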
00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 766760 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 766760 ']' 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:23.428 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:23.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:23.429 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:23.429 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:54:23.429 [2024-12-09 05:49:17.316692] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:54:23.429 [2024-12-09 05:49:17.317733] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:54:23.429 [2024-12-09 05:49:17.317802] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:23.429 [2024-12-09 05:49:17.388478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:54:23.429 [2024-12-09 05:49:17.442380] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:54:23.429 [2024-12-09 05:49:17.442438] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:54:23.429 [2024-12-09 05:49:17.442452] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:23.429 [2024-12-09 05:49:17.442472] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:54:23.429 [2024-12-09 05:49:17.442482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:54:23.429 [2024-12-09 05:49:17.443938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:54:23.429 [2024-12-09 05:49:17.444017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:54:23.429 [2024-12-09 05:49:17.444020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:54:23.429 [2024-12-09 05:49:17.528853] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:54:23.429 [2024-12-09 05:49:17.529074] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:54:23.429 [2024-12-09 05:49:17.529087] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
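The launch traced just above starts nvmf_tgt inside that namespace with -i 0 -e 0xFFFF --interrupt-mode -m 0xE (shared-memory id 0, tracepoint group mask 0xFFFF, interrupt mode, core mask 0xE giving the three reactors on cores 1-3 seen in the notices), and the harness then blocks in waitforlisten until the RPC socket is up. A rough standalone equivalent is sketched below; it assumes the SPDK checkout as working directory, the default /var/tmp/spdk.sock RPC socket, and it approximates waitforlisten by polling rpc_get_methods rather than reproducing the harness helper.

# sketch only: launch the target in the test namespace and wait for its RPC socket
ip netns exec cvl_0_0_ns_spdk \
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!
# poll the RPC socket until the app is ready to accept provisioning RPCs
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done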
00:54:23.429 [2024-12-09 05:49:17.529352] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:54:23.429 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:23.429 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:54:23.429 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:54:23.429 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:54:23.429 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:54:23.429 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:54:23.429 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:54:23.429 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:23.429 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:54:23.429 [2024-12-09 05:49:17.588709] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:23.429 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:23.429 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:54:23.429 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:23.429 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:54:23.429 Malloc0 00:54:23.429 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:23.429 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:54:23.429 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:23.429 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:54:23.429 Delay0 00:54:23.429 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:23.429 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:54:23.429 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:23.429 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:54:23.429 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:23.429 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:54:23.429 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:54:23.429 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:54:23.687 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:23.687 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:54:23.687 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:23.687 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:54:23.687 [2024-12-09 05:49:17.660923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:54:23.687 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:23.687 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:54:23.687 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:23.687 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:54:23.687 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:23.687 05:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:54:23.687 [2024-12-09 05:49:17.729856] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:54:26.217 Initializing NVMe Controllers 00:54:26.217 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:54:26.217 controller IO queue size 128 less than required 00:54:26.217 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:54:26.217 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:54:26.217 Initialization complete. Launching workers. 
00:54:26.217 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28258 00:54:26.217 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28315, failed to submit 66 00:54:26.217 success 28258, unsuccessful 57, failed 0 00:54:26.217 05:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:54:26.217 05:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:26.217 05:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:54:26.217 05:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:26.217 05:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:54:26.217 05:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:54:26.217 05:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:54:26.217 05:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:54:26.217 05:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:54:26.217 05:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:54:26.217 05:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:54:26.217 05:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:54:26.217 rmmod nvme_tcp 00:54:26.217 rmmod nvme_fabrics 00:54:26.217 rmmod nvme_keyring 00:54:26.217 05:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:54:26.217 05:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:54:26.217 05:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:54:26.217 05:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 766760 ']' 00:54:26.217 05:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 766760 00:54:26.217 05:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 766760 ']' 00:54:26.217 05:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 766760 00:54:26.217 05:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:54:26.217 05:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:26.217 05:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 766760 00:54:26.217 05:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:54:26.217 05:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:54:26.217 05:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 766760' 00:54:26.217 killing process with pid 766760 00:54:26.217 
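Taken together, the rpc_cmd calls traced for this test provision a single delay-backed namespace behind nqn.2016-06.io.spdk:cnode0 and then drive it with the abort example at queue depth 128, which is where the "queue size 128 less than required" warning and the abort statistics above come from. A condensed sketch of the same sequence issued through scripts/rpc.py (the harness' rpc_cmd wrapper ultimately calls it); every flag value is copied from the trace, while the relative paths and the -s socket argument are assumptions about running from the SPDK checkout with the default RPC socket:

# sketch of the traced provisioning for the nvmf_abort run
RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192 -a 256           # TCP transport, options as traced
$RPC bdev_malloc_create 64 4096 -b Malloc0                    # 64 MiB malloc bdev, 4096-byte blocks
$RPC bdev_delay_create -b Malloc0 -d Delay0 \
  -r 1000000 -t 1000000 -w 1000000 -n 1000000                 # delay bdev with the traced latency values
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./build/examples/abort -q 128 -c 0x1 -t 1 -l warning \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'    # abort workload, flags as traced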
05:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 766760 00:54:26.217 05:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 766760 00:54:26.217 05:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:54:26.217 05:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:54:26.217 05:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:54:26.217 05:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:54:26.218 05:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:54:26.218 05:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:54:26.218 05:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:54:26.218 05:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:54:26.218 05:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:54:26.218 05:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:26.218 05:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:54:26.218 05:49:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:28.126 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:54:28.126 00:54:28.126 real 0m7.530s 00:54:28.126 user 0m9.375s 00:54:28.126 sys 0m3.045s 00:54:28.126 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:54:28.126 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:54:28.126 ************************************ 00:54:28.126 END TEST nvmf_abort 00:54:28.126 ************************************ 00:54:28.126 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:54:28.126 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:54:28.126 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:54:28.126 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:54:28.385 ************************************ 00:54:28.385 START TEST nvmf_ns_hotplug_stress 00:54:28.385 ************************************ 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:54:28.385 * Looking for test storage... 
00:54:28.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:54:28.385 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:54:28.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:28.386 --rc genhtml_branch_coverage=1 00:54:28.386 --rc genhtml_function_coverage=1 00:54:28.386 --rc genhtml_legend=1 00:54:28.386 --rc geninfo_all_blocks=1 00:54:28.386 --rc geninfo_unexecuted_blocks=1 00:54:28.386 00:54:28.386 ' 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:54:28.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:28.386 --rc genhtml_branch_coverage=1 00:54:28.386 --rc genhtml_function_coverage=1 00:54:28.386 --rc genhtml_legend=1 00:54:28.386 --rc geninfo_all_blocks=1 00:54:28.386 --rc geninfo_unexecuted_blocks=1 00:54:28.386 00:54:28.386 ' 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:54:28.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:28.386 --rc genhtml_branch_coverage=1 00:54:28.386 --rc genhtml_function_coverage=1 00:54:28.386 --rc genhtml_legend=1 00:54:28.386 --rc geninfo_all_blocks=1 00:54:28.386 --rc geninfo_unexecuted_blocks=1 00:54:28.386 00:54:28.386 ' 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:54:28.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:28.386 --rc genhtml_branch_coverage=1 00:54:28.386 --rc genhtml_function_coverage=1 
00:54:28.386 --rc genhtml_legend=1 00:54:28.386 --rc geninfo_all_blocks=1 00:54:28.386 --rc geninfo_unexecuted_blocks=1 00:54:28.386 00:54:28.386 ' 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:54:28.386 05:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:54:30.920 05:49:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:54:30.920 05:49:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:54:30.920 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:54:30.920 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:54:30.920 
05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:54:30.920 Found net devices under 0000:0a:00.0: cvl_0_0 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:54:30.920 Found net devices under 0000:0a:00.1: cvl_0_1 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:54:30.920 05:49:24 
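The block above is nvmf/common.sh discovering the NICs: the two Intel E810 functions (0000:0a:00.0/1, vendor:device 0x8086:0x159b, bound to ice) are collected into pci_devs, and each function is mapped to its net device by globbing sysfs, which is where cvl_0_0 and cvl_0_1 come from. A minimal sketch of that lookup, with the PCI addresses and interface names taken from this trace:

  # map each E810 PCI function to its net device, as nvmf/common.sh does above
  pci_devs=(0000:0a:00.0 0000:0a:00.1)      # matched via vendor:device 0x8086:0x159b
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
      net_devs+=("${pci_net_devs[@]}")
  done
  echo "${net_devs[@]}"                     # cvl_0_0 cvl_0_1 on this test bed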
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:54:30.920 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:54:30.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:54:30.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:54:30.921 00:54:30.921 --- 10.0.0.2 ping statistics --- 00:54:30.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:30.921 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:54:30.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:54:30.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:54:30.921 00:54:30.921 --- 10.0.0.1 ping statistics --- 00:54:30.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:30.921 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=768993 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 768993 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 768993 ']' 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:30.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:30.921 05:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:54:30.921 [2024-12-09 05:49:24.785874] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:54:30.921 [2024-12-09 05:49:24.787029] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:54:30.921 [2024-12-09 05:49:24.787091] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:30.921 [2024-12-09 05:49:24.862390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:54:30.921 [2024-12-09 05:49:24.922435] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:54:30.921 [2024-12-09 05:49:24.922502] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:54:30.921 [2024-12-09 05:49:24.922516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:30.921 [2024-12-09 05:49:24.922528] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:54:30.921 [2024-12-09 05:49:24.922538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:54:30.921 [2024-12-09 05:49:24.924081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:54:30.921 [2024-12-09 05:49:24.924145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:54:30.921 [2024-12-09 05:49:24.924149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:54:30.921 [2024-12-09 05:49:25.022524] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:54:30.921 [2024-12-09 05:49:25.022761] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:54:30.921 [2024-12-09 05:49:25.022775] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:54:30.921 [2024-12-09 05:49:25.023013] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
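At this point nvmftestinit has finished building the back-to-back TCP test bed: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as the target port with 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator port with 10.0.0.1/24, port 4420 is opened in iptables, both directions are ping-verified, and nvmf_tgt is launched inside the namespace with -m 0xE -e 0xFFFF --interrupt-mode (hence the three reactors and the interrupt-mode notices above). A condensed sketch of those steps, taken from the commands in the trace (nvmf_tgt shown with a relative path instead of the full workspace path):

  # namespace topology and target launch, condensed from nvmf_tcp_init / nvmfappstart above
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> root ns
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &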
00:54:30.921 05:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:30.921 05:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:54:30.921 05:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:54:30.921 05:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:54:30.921 05:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:54:30.921 05:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:54:30.921 05:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:54:30.921 05:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:54:31.179 [2024-12-09 05:49:25.328877] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:31.179 05:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:54:31.437 05:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:54:31.695 [2024-12-09 05:49:25.881305] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:54:31.695 05:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:54:31.952 05:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:54:32.517 Malloc0 00:54:32.517 05:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:54:32.517 Delay0 00:54:32.517 05:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:33.081 05:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:54:33.337 NULL1 00:54:33.337 05:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
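ns_hotplug_stress.sh then configures the target entirely over rpc.py: a TCP transport (the -o -u 8192 options come straight from the trace), subsystem cnode1 capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, a delay bdev (Delay0) stacked on a 32 MB malloc bdev, and a null bdev (NULL1) created at size 1000, with both bdevs exposed as namespaces. The same sequence collapsed into one listing, where rpc stands for the full scripts/rpc.py path used in the trace:

  rpc="scripts/rpc.py"                                    # shorthand for the workspace rpc.py above
  $rpc nvmf_create_transport -t tcp -o -u 8192            # options exactly as in the trace
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0               # 32 MB malloc bdev, 512-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # first namespace, nsid 1
  $rpc bdev_null_create NULL1 1000 512                    # null_size starts at 1000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1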
00:54:33.594 05:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=769399 00:54:33.594 05:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:54:33.594 05:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:54:33.594 05:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:35.029 Read completed with error (sct=0, sc=11) 00:54:35.029 05:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:35.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:54:35.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:54:35.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:54:35.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:54:35.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:54:35.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:54:35.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:54:35.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:54:35.029 05:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:54:35.029 05:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:54:35.313 true 00:54:35.570 05:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:54:35.570 05:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:36.137 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:54:36.137 05:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:36.394 05:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:54:36.395 05:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:54:36.652 true 00:54:36.652 05:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:54:36.652 05:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
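With the namespaces in place, spdk_nvme_perf (PERF_PID 769399 above) runs 512-byte random reads at queue depth 128 against 10.0.0.2:4420 for 30 seconds while the script repeatedly hot-removes nsid 1, re-adds Delay0, and resizes NULL1 one step larger per pass. The "Read completed with error (sct=0, sc=11)" lines are reads failing while the namespace is detached, which is exactly what the test exercises, and kill -0 on the perf PID confirms the workload is still alive before each pass. The loop behind the repeating trace below is roughly the following, reusing the rpc shorthand from the previous sketch:

  spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
                 -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do               # loop until the 30 s perf run exits
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # hot-remove nsid 1
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # hot-add it back
      null_size=$((null_size + 1))                                   # 1001, 1002, ...
      $rpc bdev_null_resize NULL1 "$null_size"
  done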
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:36.910 05:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:37.168 05:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:54:37.168 05:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:54:37.425 true 00:54:37.425 05:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:54:37.425 05:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:37.990 05:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:37.990 05:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:54:37.990 05:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:54:38.247 true 00:54:38.248 05:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:54:38.248 05:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:39.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:54:39.180 05:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:39.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:54:39.438 05:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:54:39.438 05:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:54:39.695 true 00:54:39.953 05:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:54:39.953 05:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:40.211 05:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:54:40.469 05:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:54:40.469 05:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:54:40.728 true 00:54:40.728 05:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:54:40.728 05:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:40.986 05:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:41.243 05:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:54:41.243 05:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:54:41.500 true 00:54:41.500 05:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:54:41.500 05:49:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:42.429 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:54:42.429 05:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:42.686 05:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:54:42.686 05:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:54:42.942 true 00:54:42.942 05:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:54:42.942 05:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:43.198 05:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:43.455 05:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:54:43.455 05:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:54:43.711 true 00:54:43.711 05:49:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:54:43.711 05:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:43.967 05:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:44.223 05:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:54:44.224 05:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:54:44.480 true 00:54:44.480 05:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:54:44.480 05:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:45.847 05:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:45.847 05:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:54:45.847 05:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:54:46.103 true 00:54:46.103 05:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:54:46.103 05:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:46.361 05:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:46.619 05:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:54:46.619 05:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:54:46.876 true 00:54:46.876 05:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:54:46.876 05:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:47.133 05:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:47.391 05:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:54:47.391 05:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:54:47.649 true 00:54:47.649 05:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:54:47.649 05:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:48.582 05:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:48.840 05:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:54:48.840 05:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:54:49.097 true 00:54:49.097 05:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:54:49.097 05:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:49.663 05:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:49.663 05:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:54:49.663 05:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:54:49.921 true 00:54:49.921 05:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:54:49.921 05:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:50.179 05:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:50.745 05:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:54:50.745 05:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:54:50.745 true 00:54:50.745 05:49:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:54:50.745 05:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:51.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:54:51.677 05:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:51.934 05:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:54:51.935 05:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:54:52.192 true 00:54:52.192 05:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:54:52.192 05:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:52.449 05:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:52.706 05:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:54:52.706 05:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:54:52.963 true 00:54:52.963 05:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:54:52.963 05:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:53.220 05:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:53.477 05:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:54:53.478 05:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:54:53.734 true 00:54:53.991 05:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:54:53.991 05:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:54.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:54:54.921 05:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:55.177 05:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:54:55.177 05:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:54:55.434 true 00:54:55.434 05:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:54:55.434 05:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:55.691 05:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:55.948 05:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:54:55.948 05:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:54:56.204 true 00:54:56.204 05:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:54:56.204 05:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:56.461 05:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:56.717 05:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:54:56.717 05:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:54:56.974 true 00:54:56.974 05:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:54:56.974 05:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:57.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:54:57.904 05:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:58.161 05:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:54:58.161 05:49:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:54:58.419 true 00:54:58.419 05:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:54:58.419 05:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:58.697 05:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:58.954 05:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:54:58.954 05:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:54:59.211 true 00:54:59.211 05:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:54:59.211 05:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:54:59.775 05:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:59.775 05:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:54:59.775 05:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:55:00.033 true 00:55:00.033 05:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:55:00.033 05:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:55:01.433 05:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:55:01.433 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:55:01.433 05:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:55:01.433 05:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:55:01.690 true 00:55:01.690 05:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:55:01.690 05:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:55:01.947 05:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:55:02.204 05:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:55:02.204 05:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:55:02.461 true 00:55:02.461 05:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:55:02.461 05:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:55:02.718 05:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:55:02.975 05:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:55:02.975 05:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:55:03.233 true 00:55:03.233 05:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:55:03.233 05:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:55:04.163 Initializing NVMe Controllers 00:55:04.163 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:55:04.163 Controller IO queue size 128, less than required. 00:55:04.164 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:55:04.164 Controller IO queue size 128, less than required. 00:55:04.164 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:55:04.164 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:55:04.164 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:55:04.164 Initialization complete. Launching workers. 
00:55:04.164 ========================================================
00:55:04.164                                                                                 Latency(us)
00:55:04.164 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:55:04.164 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     598.43       0.29   87253.37    3139.33 1015427.44
00:55:04.164 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    8234.80       4.02   15543.54    2652.08  544331.42
00:55:04.164 ========================================================
00:55:04.164 Total                                                                  :    8833.23       4.31   20401.73    2652.08 1015427.44
00:55:04.164
00:55:04.420 05:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:55:04.677 05:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:55:04.677 05:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:55:04.934 true 00:55:04.934 05:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 769399 00:55:04.934 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (769399) - No such process 00:55:04.934 05:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 769399 00:55:04.934 05:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:55:05.191 05:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:55:05.449 05:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:55:05.449 05:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:55:05.449 05:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:55:05.449 05:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:55:05.449 05:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:55:05.706 null0 00:55:05.706 05:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:55:05.707 05:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:55:05.707 05:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:55:05.964 null1 00:55:05.964 05:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:55:05.964 05:50:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:55:05.964 05:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:55:06.221 null2 00:55:06.221 05:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:55:06.221 05:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:55:06.221 05:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:55:06.477 null3 00:55:06.477 05:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:55:06.477 05:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:55:06.477 05:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:55:06.734 null4 00:55:06.734 05:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:55:06.734 05:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:55:06.734 05:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:55:06.991 null5 00:55:06.991 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:55:06.991 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:55:06.991 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:55:07.247 null6 00:55:07.247 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:55:07.247 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:55:07.247 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:55:07.505 null7 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:55:07.505 05:50:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
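The @58-@64 entries around this point start the second phase of the test: eight null bdevs (null0 through null7, created with bdev_null_create <name> 100 4096) and one backgrounded add_remove worker per bdev, whose PIDs are collected for the later wait. A rough sketch of that fan-out, inferred from the trace (rpc_py is an assumed stand-in as before, add_remove is the worker sketched a little further down, and this is not the verbatim script):

  # Sketch of the worker fan-out seen in the @58-@64 trace entries.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      "$rpc_py" bdev_null_create "null$i" 100 4096    # arguments as in the trace: name, size, block size
  done
  for ((i = 0; i < nthreads; i++)); do
      add_remove $((i + 1)) "null$i" &                # worker for namespace i+1, backed by null$i
      pids+=($!)                                      # matches the pids+=($!) entries in the log
  done
  wait "${pids[@]}"                                   # matches the later "wait 773411 773412 ..." entry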
00:55:07.505 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:55:07.506 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:55:07.506 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:55:07.506 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:55:07.506 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:55:07.506 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:07.506 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:55:07.506 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:55:07.506 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:55:07.506 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:55:07.506 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:55:07.506 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:55:07.506 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:55:07.506 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 773411 773412 773414 773416 773418 773420 773422 773424 00:55:07.506 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:07.506 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:55:07.763 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:55:08.021 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:55:08.021 05:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:55:08.021 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:55:08.021 05:50:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:55:08.021 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:55:08.021 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:55:08.021 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:55:08.280 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:08.280 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:08.280 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:55:08.280 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:08.280 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:08.280 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:55:08.280 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:08.280 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:08.280 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:55:08.280 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:08.280 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:08.280 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:55:08.280 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:08.280 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:08.280 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:55:08.280 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:08.280 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:08.280 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:55:08.280 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:08.280 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:08.280 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:55:08.280 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:08.280 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:08.280 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:55:08.538 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:55:08.538 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:55:08.538 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:55:08.538 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:55:08.538 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:55:08.538 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:55:08.538 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:55:08.538 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
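From here to the end of the section, the interleaved @14-@18 entries are those eight add_remove workers racing one another: each attaches its null bdev as a fixed namespace ID and immediately detaches it again, ten times per worker according to the (( i < 10 )) counter in the trace. A reconstruction of that worker, inferred from the trace alone (not the verbatim SPDK source; rpc_py and nqn as in the earlier sketches):

  # Sketch of the add_remove worker behind the @14-@18 trace entries.
  # rpc_py and nqn are assumed to be set as in the earlier sketches.
  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # attach $bdev as namespace $nsid
          "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # and detach it again right away
      done
  }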
00:55:08.796 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:08.796 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:08.796 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:55:08.796 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:08.796 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:08.796 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:55:08.796 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:08.796 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:08.796 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:55:08.796 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:08.796 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:08.796 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:55:08.796 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:08.796 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:08.796 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:08.796 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:08.796 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:55:08.796 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:55:08.796 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:08.796 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:08.796 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:55:08.796 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:08.797 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:08.797 05:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:55:09.055 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:55:09.055 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:55:09.055 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:55:09.055 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:55:09.055 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:55:09.055 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:55:09.055 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:55:09.055 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:55:09.313 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:09.313 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:09.313 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:55:09.313 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:09.313 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:09.313 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:55:09.313 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:09.313 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:09.313 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:55:09.313 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:09.313 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:09.313 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:55:09.313 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:09.313 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:09.313 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:55:09.313 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:09.313 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:09.313 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:55:09.313 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:09.313 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:09.314 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:55:09.314 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:09.314 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:09.314 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:55:09.571 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:55:09.572 05:50:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:55:09.572 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:55:09.572 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:55:09.572 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:55:09.572 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:55:09.829 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:55:09.829 05:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:55:10.087 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:10.087 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:10.087 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:55:10.087 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:10.087 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:10.087 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:55:10.087 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:10.087 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:10.087 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:55:10.087 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:10.087 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:10.088 05:50:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:55:10.088 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:10.088 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:10.088 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:55:10.088 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:10.088 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:10.088 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:55:10.088 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:10.088 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:10.088 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:55:10.088 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:10.088 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:10.088 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:55:10.345 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:55:10.345 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:55:10.345 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:55:10.345 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:55:10.345 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:55:10.345 
05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:55:10.345 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:55:10.345 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:55:10.603 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:10.603 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:10.603 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:55:10.603 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:10.603 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:10.603 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:55:10.603 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:10.603 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:10.603 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:55:10.603 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:10.603 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:10.603 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:55:10.603 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:10.603 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:10.603 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:55:10.603 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:10.603 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:10.603 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:55:10.603 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:10.603 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:10.603 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:55:10.603 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:10.603 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:10.603 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:55:10.861 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:55:10.861 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:55:10.861 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:55:10.861 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:55:10.861 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:55:10.861 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:55:10.861 05:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:55:10.861 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:55:11.128 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:11.128 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:55:11.128 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:55:11.128 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:11.128 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:11.128 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:55:11.128 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:11.128 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:11.128 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:55:11.128 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:11.128 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:11.128 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:55:11.128 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:11.128 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:11.128 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:55:11.128 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:11.128 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:11.128 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:55:11.128 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:11.128 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:11.128 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:55:11.128 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:11.128 
05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:11.128 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:55:11.389 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:55:11.389 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:55:11.389 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:55:11.389 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:55:11.389 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:55:11.389 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:55:11.389 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:55:11.389 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:55:11.954 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:11.954 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:11.954 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:55:11.954 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:11.954 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:11.954 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:55:11.954 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:11.954 05:50:05 
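Editor's note for readers skimming the trace: the repeating @16/@17/@18 entries above and below are a single loop in target/ns_hotplug_stress.sh that keeps attaching and detaching eight null bdevs as namespaces of nqn.2016-06.io.spdk:cnode1 for ten iterations. The following is a sketch reconstructed only from the trace, not the script itself; the interleaved counter lines suggest the add calls run concurrently, so the background jobs here are an approximation of that, not the script's exact structure.

# Reconstructed sketch of the hotplug loop traced here (not the verbatim script).
# Assumes bdevs null0..null7 and subsystem nqn.2016-06.io.spdk:cnode1 already exist.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
for (( i = 0; i < 10; i++ )); do
    # script line 17: attach bdevs null0..null7 as namespaces 1..8
    # (the '&'/wait is illustrative of the concurrency visible in the xtrace)
    for n in {1..8}; do
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &
    done
    wait
    # script line 18: detach all eight namespaces again before the next iteration
    for n in {1..8}; do
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
    done
done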
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:11.954 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:55:11.954 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:11.954 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:11.954 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:11.954 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:55:11.954 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:11.954 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:55:11.954 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:11.954 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:11.954 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:11.954 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:55:11.954 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:11.954 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:55:11.954 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:11.954 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:11.954 05:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:55:11.954 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:55:11.954 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:55:12.212 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:55:12.212 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:55:12.212 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:55:12.212 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:55:12.212 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:55:12.212 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:55:12.470 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:12.470 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:12.470 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:55:12.470 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:12.470 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:12.470 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:55:12.470 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:12.470 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:12.470 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:55:12.470 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:12.470 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:12.470 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:55:12.470 05:50:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:12.470 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:12.470 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:55:12.470 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:12.470 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:12.470 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:12.470 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:55:12.470 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:12.470 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:55:12.470 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:12.470 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:12.471 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:55:12.729 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:55:12.729 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:55:12.729 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:55:12.729 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:55:12.729 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:55:12.729 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:55:12.729 
05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:55:12.729 05:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:55:12.987 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:12.987 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:12.987 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:55:12.987 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:12.987 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:12.987 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:55:12.987 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:12.987 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:12.987 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:55:12.987 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:12.987 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:12.987 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:12.987 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:55:12.987 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:12.987 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:55:12.987 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:12.987 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:12.987 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:55:12.987 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:12.987 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:12.987 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:55:12.988 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:12.988 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:12.988 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:55:13.246 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:55:13.246 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:55:13.246 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:55:13.246 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:55:13.246 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:55:13.246 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:55:13.246 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:55:13.246 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:55:13.504 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:13.504 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:13.504 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:13.504 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:13.504 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:13.504 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:13.504 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:13.504 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:13.504 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:13.504 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:13.504 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:13.504 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:13.504 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:13.504 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:13.504 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:55:13.504 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:55:13.504 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:55:13.504 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:55:13.504 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:55:13.504 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:55:13.504 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:55:13.504 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:55:13.504 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:55:13.504 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:55:13.762 rmmod nvme_tcp 00:55:13.762 rmmod nvme_fabrics 00:55:13.762 rmmod nvme_keyring 00:55:13.762 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:55:13.762 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:55:13.762 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:55:13.762 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 768993 ']' 00:55:13.762 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 768993 00:55:13.762 05:50:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 768993 ']' 00:55:13.762 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 768993 00:55:13.762 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:55:13.762 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:55:13.762 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 768993 00:55:13.762 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:55:13.762 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:55:13.762 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 768993' 00:55:13.762 killing process with pid 768993 00:55:13.762 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 768993 00:55:13.762 05:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 768993 00:55:14.022 05:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:55:14.022 05:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:55:14.022 05:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:55:14.022 05:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:55:14.022 05:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:55:14.022 05:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:55:14.022 05:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:55:14.022 05:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:55:14.022 05:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:55:14.022 05:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:14.022 05:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:14.022 05:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:15.922 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:55:15.922 00:55:15.922 real 0m47.763s 00:55:15.922 user 3m19.982s 00:55:15.922 sys 0m21.680s 00:55:15.922 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:55:15.922 05:50:10 
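The cleanup traced here is nvmftestfini from the nvmf test common.sh: unload the NVMe-oF kernel modules, kill the target process started for this test (pid 768993), strip the SPDK-tagged iptables rules, and remove the test network namespace. A condensed sketch of those steps as they appear in the trace; the netns deletion is an assumption about what the _remove_spdk_ns helper does.

# Condensed nvmftestfini sequence per the trace above (illustrative, not verbatim).
sync
modprobe -v -r nvme-tcp            # the helper retries the module unload up to 20 times
modprobe -v -r nvme-fabrics
kill 768993                        # nvmfpid of the target; the helper then waits for it to exit
# drop only the SPDK-tagged firewall rules, leaving everything else in place
iptables-save | grep -v SPDK_NVMF | iptables-restore
# _remove_spdk_ns: assumed here to delete the namespace created during setup
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1           # flush the initiator-side interface, as shown in the trace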
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:55:15.922 ************************************ 00:55:15.922 END TEST nvmf_ns_hotplug_stress 00:55:15.922 ************************************ 00:55:15.922 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:55:15.922 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:55:15.922 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:55:15.922 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:55:16.180 ************************************ 00:55:16.180 START TEST nvmf_delete_subsystem 00:55:16.180 ************************************ 00:55:16.180 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:55:16.180 * Looking for test storage... 00:55:16.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:55:16.180 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:55:16.181 05:50:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:55:16.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:16.181 --rc genhtml_branch_coverage=1 00:55:16.181 --rc genhtml_function_coverage=1 00:55:16.181 --rc genhtml_legend=1 00:55:16.181 --rc geninfo_all_blocks=1 00:55:16.181 --rc geninfo_unexecuted_blocks=1 00:55:16.181 00:55:16.181 ' 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:55:16.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:16.181 --rc genhtml_branch_coverage=1 00:55:16.181 --rc genhtml_function_coverage=1 00:55:16.181 --rc genhtml_legend=1 00:55:16.181 --rc geninfo_all_blocks=1 00:55:16.181 --rc geninfo_unexecuted_blocks=1 00:55:16.181 00:55:16.181 ' 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:55:16.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:16.181 --rc genhtml_branch_coverage=1 00:55:16.181 --rc genhtml_function_coverage=1 00:55:16.181 --rc genhtml_legend=1 00:55:16.181 --rc geninfo_all_blocks=1 00:55:16.181 --rc 
geninfo_unexecuted_blocks=1 00:55:16.181 00:55:16.181 ' 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:55:16.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:16.181 --rc genhtml_branch_coverage=1 00:55:16.181 --rc genhtml_function_coverage=1 00:55:16.181 --rc genhtml_legend=1 00:55:16.181 --rc geninfo_all_blocks=1 00:55:16.181 --rc geninfo_unexecuted_blocks=1 00:55:16.181 00:55:16.181 ' 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:55:16.181 05:50:10 
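The lt/cmp_versions steps traced a little earlier (scripts/common.sh, right after the delete_subsystem test starts) are a component-wise dotted-version comparison used to pick lcov option spellings; the installed lcov here reports 1.15, so it is treated as older than 2 and the legacy --rc lcov_* options seen in the LCOV_OPTS export are used. A condensed sketch of that comparison, with the helper's decimal/option handling trimmed:

# Component-wise "is version A < version B" check, per the cmp_versions trace (condensed).
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"     # e.g. 1.15 -> (1 15)
    IFS=.-: read -ra ver2 <<< "$2"     # e.g. 2    -> (2)
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # A newer -> not less-than
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # A older -> less-than
    done
    return 1                                              # equal -> not less-than
}
lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_* option names"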
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:55:16.181 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:55:16.182 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:55:16.182 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:55:16.182 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:55:16.182 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:55:16.182 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:55:16.182 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:55:16.182 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:55:16.182 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:55:16.182 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:55:16.182 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:55:16.182 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:55:16.182 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:55:16.182 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:55:16.182 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:16.182 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:16.182 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:16.182 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:55:16.182 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:55:16.182 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:55:16.182 05:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:55:18.709 05:50:12 
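build_nvmf_app_args, traced just above, is what turns the --interrupt-mode flag passed to this test into the target's command line: the app always gets the shared-memory id and the 0xFFFF error/trace mask, and the interrupt-mode switch is appended only when the test asked for it. A reduced sketch covering only the branches this run took; the guard variable name is illustrative, and the array is assumed to already start with the nvmf_tgt binary path set earlier in common.sh.

# Reduced sketch of build_nvmf_app_args as traced above (only the branches this run took).
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id (0 in this run) and full error/trace mask
NVMF_APP+=("${NO_HUGE[@]}")                   # empty here; populated only for no-hugepages jobs
if [ "$interrupt_mode" -eq 1 ]; then          # variable name illustrative; the trace shows '[' 1 -eq 1 ']'
    NVMF_APP+=(--interrupt-mode)
fi
# the launch later becomes: ip netns exec cvl_0_0_ns_spdk "${NVMF_APP[@]}" -m 0x3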
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:55:18.709 05:50:12 
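The e810/x722/mlx arrays built above are allowlists of PCI device IDs; the helper then keeps only the detected NICs whose vendor:device pair is on the list, which in this run selects the two Intel 0x159b E810 functions. A generic sketch of that kind of check against standard sysfs paths; the real helper matches against a pre-built pci_bus_cache, which is not reproduced here.

# Generic sketch: find PCI functions whose vendor:device is in the E810 allowlist.
e810_ids=(0x1592 0x159b)                       # IDs taken from the trace above
for dev in /sys/bus/pci/devices/*; do
    vendor=$(cat "$dev/vendor")                # e.g. 0x8086
    device=$(cat "$dev/device")                # e.g. 0x159b
    [[ $vendor == 0x8086 ]] || continue
    for id in "${e810_ids[@]}"; do
        if [[ $device == "$id" ]]; then
            echo "Found ${dev##*/} ($vendor - $device)"
        fi
    done
done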
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:55:18.709 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:55:18.709 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:55:18.709 05:50:12 
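Once a supported PCI function is selected, the interface name is read straight from sysfs: every netdev bound to that function shows up as a directory under /sys/bus/pci/devices/<bdf>/net/, which is exactly the pci_net_devs glob in the trace. For example, for the first E810 port in this run:

# List the kernel interface name(s) behind a PCI function, as the glob above does.
pci=0000:0a:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
for path in "${pci_net_devs[@]}"; do
    [[ -e $path ]] || continue                 # glob stays literal if the NIC has no netdev
    echo "Found net devices under $pci: ${path##*/}"   # prints cvl_0_0 on this machine
done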
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:55:18.709 Found net devices under 0000:0a:00.0: cvl_0_0 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:55:18.709 Found net devices under 0000:0a:00.1: cvl_0_1 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:55:18.709 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:55:18.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:55:18.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:55:18.710 00:55:18.710 --- 10.0.0.2 ping statistics --- 00:55:18.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:18.710 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:55:18.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:55:18.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:55:18.710 00:55:18.710 --- 10.0.0.1 ping statistics --- 00:55:18.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:18.710 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=776284 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 776284 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 776284 ']' 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:55:18.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
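Taken together, the nvmf_tcp_init entries above are a small two-port bring-up: the target-side e810 interface (cvl_0_0) is moved into its own network namespace, the initiator-side interface (cvl_0_1) stays in the default namespace, the two sides get 10.0.0.2 and 10.0.0.1 on a /24, an iptables rule opens TCP port 4420, and a ping in each direction confirms the path. A minimal standalone sketch of the same sequence, with interface names and addresses copied from the log (illustrative only, not a drop-in replacement for nvmf/common.sh):

  NS=cvl_0_0_ns_spdk    # namespace that owns the target-side port
  TGT_IF=cvl_0_0        # target-side port, moved into $NS
  INI_IF=cvl_0_1        # initiator-side port, left in the default namespace

  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  ping -c 1 10.0.0.2                        # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator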
00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:55:18.710 [2024-12-09 05:50:12.659980] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:55:18.710 [2024-12-09 05:50:12.661035] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:55:18.710 [2024-12-09 05:50:12.661106] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:55:18.710 [2024-12-09 05:50:12.732536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:55:18.710 [2024-12-09 05:50:12.785804] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:55:18.710 [2024-12-09 05:50:12.785865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:55:18.710 [2024-12-09 05:50:12.785889] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:55:18.710 [2024-12-09 05:50:12.785899] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:55:18.710 [2024-12-09 05:50:12.785909] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:55:18.710 [2024-12-09 05:50:12.787143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:55:18.710 [2024-12-09 05:50:12.787149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:55:18.710 [2024-12-09 05:50:12.870976] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:55:18.710 [2024-12-09 05:50:12.871017] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:55:18.710 [2024-12-09 05:50:12.871245] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
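The notices above come from nvmf_tgt starting inside that namespace with --interrupt-mode and core mask 0x3 (reactors on cores 0 and 1, each SPDK thread switched to interrupt mode). The launch-and-wait pattern that nvmfappstart/waitforlisten perform here can be sketched as follows; paths are shortened, and the rpc_get_methods probe is an assumed readiness check rather than a quote from the harness:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!
  # poll the RPC socket until the target answers, bail out if it dies first
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
      sleep 0.5
  done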
00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:18.710 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:55:18.710 [2024-12-09 05:50:12.927758] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:55:18.969 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:18.969 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:55:18.969 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:18.969 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:55:18.969 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:18.969 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:55:18.969 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:18.969 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:55:18.969 [2024-12-09 05:50:12.947979] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:55:18.969 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:18.969 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:55:18.969 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:18.969 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:55:18.969 NULL1 00:55:18.969 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:18.969 05:50:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:55:18.969 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:18.969 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:55:18.969 Delay0 00:55:18.969 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:18.969 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:55:18.969 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:18.969 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:55:18.969 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:18.969 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=776318 00:55:18.969 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:55:18.969 05:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:55:18.969 [2024-12-09 05:50:13.025963] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
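delete_subsystem.sh then builds the target it is about to tear down: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 (max 10 namespaces), a listener on 10.0.0.2:4420, a null bdev wrapped in a delay bdev, and that bdev attached as a namespace, after which spdk_nvme_perf is launched against it from the initiator side. The same bring-up written as plain rpc.py calls (a sketch; the flags mirror the rpc_cmd lines above, and the rpc() helper is illustrative):

  rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc bdev_null_create NULL1 1000 512    # 1000 MB null backing device, 512-byte blocks
  rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # queue I/O from the initiator side (cores 2-3, QD 128, 512B random 70/30 read/write)
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!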
00:55:20.866 05:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:55:20.866 05:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:20.866 05:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:55:21.123 Read completed with error (sct=0, sc=8) 00:55:21.123 Write completed with error (sct=0, sc=8) 00:55:21.123 starting I/O failed: -6 00:55:21.123 Write completed with error (sct=0, sc=8) 00:55:21.123 Read completed with error (sct=0, sc=8) 00:55:21.123 Read completed with error (sct=0, sc=8) 00:55:21.123 Read completed with error (sct=0, sc=8) 00:55:21.123 starting I/O failed: -6 00:55:21.123 Read completed with error (sct=0, sc=8) 00:55:21.123 Write completed with error (sct=0, sc=8) 00:55:21.123 Read completed with error (sct=0, sc=8) 00:55:21.123 Read completed with error (sct=0, sc=8) 00:55:21.123 starting I/O failed: -6 00:55:21.123 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read 
completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 [2024-12-09 05:50:15.162652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbc4c00d4b0 is same with the state(6) to be set 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Read completed with error (sct=0, sc=8) 
00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 
00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Write completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 Read completed with error (sct=0, sc=8) 00:55:21.124 starting I/O failed: -6 00:55:21.124 starting I/O failed: -6 00:55:21.124 starting I/O failed: -6 00:55:21.124 starting I/O failed: -6 00:55:21.124 starting I/O failed: -6 00:55:21.124 starting I/O failed: -6 00:55:21.124 starting I/O failed: -6 00:55:21.124 starting I/O failed: -6 00:55:21.124 starting I/O failed: -6 00:55:21.124 starting I/O failed: -6 00:55:22.058 [2024-12-09 05:50:16.123561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1b9b0 is same with the state(6) to be set 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Write completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Write completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 [2024-12-09 05:50:16.164813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbc4c00d7e0 is same with the state(6) to be set 
00:55:22.058 Write completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Write completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Write completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Write completed with error (sct=0, sc=8) 00:55:22.058 Write completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 [2024-12-09 05:50:16.164978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbc4c00d020 is same with the state(6) to be set 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Write completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Write completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Write completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Write completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Write completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Write completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.058 Read completed with error (sct=0, sc=8) 00:55:22.059 Write completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Write completed with error (sct=0, sc=8) 00:55:22.059 Write completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Write completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 [2024-12-09 05:50:16.165446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a680 is same with the state(6) to be set 00:55:22.059 
Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Write completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Write completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Write completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Write completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Write completed with error (sct=0, sc=8) 00:55:22.059 Write completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Write completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Write completed with error (sct=0, sc=8) 00:55:22.059 Write completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Write completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Write completed with error (sct=0, sc=8) 00:55:22.059 Write completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 Write completed with error (sct=0, sc=8) 00:55:22.059 Write completed with error (sct=0, sc=8) 00:55:22.059 Read completed with error (sct=0, sc=8) 00:55:22.059 [2024-12-09 05:50:16.166134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a2c0 is same with the state(6) to be set 00:55:22.059 Initializing NVMe Controllers 00:55:22.059 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:55:22.059 Controller IO queue size 128, less than required. 00:55:22.059 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:55:22.059 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:55:22.059 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:55:22.059 Initialization complete. Launching workers. 
00:55:22.059 ======================================================== 00:55:22.059 Latency(us) 00:55:22.059 Device Information : IOPS MiB/s Average min max 00:55:22.059 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 183.52 0.09 916424.83 697.66 1012658.74 00:55:22.059 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 160.70 0.08 916805.94 536.89 1013067.46 00:55:22.059 ======================================================== 00:55:22.059 Total : 344.22 0.17 916602.75 536.89 1013067.46 00:55:22.059 00:55:22.059 [2024-12-09 05:50:16.166652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e1b9b0 (9): Bad file descriptor 00:55:22.059 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:22.059 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:55:22.059 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 776318 00:55:22.059 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:55:22.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 776318 00:55:22.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (776318) - No such process 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 776318 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 776318 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 776318 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:55:22.626 [2024-12-09 05:50:16.687972] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=776830 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 776830 00:55:22.626 05:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:55:22.626 [2024-12-09 05:50:16.749925] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
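With the subsystem re-created and a second, 3-second perf run started (pid 776830 above), the script only has to wait for perf to exit on its own; the repeated "(( delay++ > 20 )) / kill -0 / sleep 0.5" entries below are that wait loop unrolled, and the later "No such process" plus "wait 776830" mark its normal end. A sketch of the pattern (the exact line layout in delete_subsystem.sh may differ):

  perf_pid=776830   # in the real script this is $! from the perf launch above
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      sleep 0.5
      if (( delay++ > 20 )); then
          echo "spdk_nvme_perf (pid $perf_pid) did not finish in time" >&2
          exit 1
      fi
  done
  wait "$perf_pid" 2>/dev/null || true   # reap the exit status once it is gone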
00:55:23.192 05:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:55:23.192 05:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 776830 00:55:23.192 05:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:55:23.756 05:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:55:23.756 05:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 776830 00:55:23.756 05:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:55:24.012 05:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:55:24.012 05:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 776830 00:55:24.012 05:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:55:24.575 05:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:55:24.575 05:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 776830 00:55:24.575 05:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:55:25.138 05:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:55:25.138 05:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 776830 00:55:25.138 05:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:55:25.702 05:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:55:25.702 05:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 776830 00:55:25.702 05:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:55:25.702 Initializing NVMe Controllers 00:55:25.702 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:55:25.702 Controller IO queue size 128, less than required. 00:55:25.702 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:55:25.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:55:25.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:55:25.702 Initialization complete. Launching workers. 
00:55:25.702 ======================================================== 00:55:25.702 Latency(us) 00:55:25.702 Device Information : IOPS MiB/s Average min max 00:55:25.702 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1006004.91 1000214.23 1043823.52 00:55:25.702 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005209.20 1000184.32 1044186.67 00:55:25.702 ======================================================== 00:55:25.702 Total : 256.00 0.12 1005607.06 1000184.32 1044186.67 00:55:25.702 00:55:26.266 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:55:26.266 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 776830 00:55:26.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (776830) - No such process 00:55:26.266 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 776830 00:55:26.266 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:55:26.266 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:55:26.266 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:55:26.266 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:55:26.266 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:55:26.266 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:55:26.266 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:55:26.266 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:55:26.266 rmmod nvme_tcp 00:55:26.266 rmmod nvme_fabrics 00:55:26.266 rmmod nvme_keyring 00:55:26.266 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:55:26.266 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:55:26.266 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:55:26.266 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 776284 ']' 00:55:26.266 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 776284 00:55:26.266 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 776284 ']' 00:55:26.266 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 776284 00:55:26.266 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:55:26.266 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:55:26.266 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 776284 00:55:26.266 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:55:26.266 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:55:26.266 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 776284' 00:55:26.266 killing process with pid 776284 00:55:26.266 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 776284 00:55:26.266 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 776284 00:55:26.524 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:55:26.524 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:55:26.524 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:55:26.524 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:55:26.524 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:55:26.524 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:55:26.524 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:55:26.524 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:55:26.524 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:55:26.524 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:26.524 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:26.524 05:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:28.424 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:55:28.424 00:55:28.424 real 0m12.467s 00:55:28.424 user 0m24.846s 00:55:28.424 sys 0m3.689s 00:55:28.424 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:55:28.424 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:55:28.424 ************************************ 00:55:28.424 END TEST nvmf_delete_subsystem 00:55:28.424 ************************************ 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:55:28.683 ************************************ 00:55:28.683 START TEST nvmf_host_management 00:55:28.683 ************************************ 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:55:28.683 * Looking for test storage... 00:55:28.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:55:28.683 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:55:28.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:28.683 --rc genhtml_branch_coverage=1 00:55:28.684 --rc genhtml_function_coverage=1 00:55:28.684 --rc genhtml_legend=1 00:55:28.684 --rc geninfo_all_blocks=1 00:55:28.684 --rc geninfo_unexecuted_blocks=1 00:55:28.684 00:55:28.684 ' 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:55:28.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:28.684 --rc genhtml_branch_coverage=1 00:55:28.684 --rc genhtml_function_coverage=1 00:55:28.684 --rc genhtml_legend=1 00:55:28.684 --rc geninfo_all_blocks=1 00:55:28.684 --rc geninfo_unexecuted_blocks=1 00:55:28.684 00:55:28.684 ' 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:55:28.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:28.684 --rc genhtml_branch_coverage=1 00:55:28.684 --rc genhtml_function_coverage=1 00:55:28.684 --rc genhtml_legend=1 00:55:28.684 --rc geninfo_all_blocks=1 00:55:28.684 --rc geninfo_unexecuted_blocks=1 00:55:28.684 00:55:28.684 ' 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:55:28.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:28.684 --rc genhtml_branch_coverage=1 00:55:28.684 --rc genhtml_function_coverage=1 00:55:28.684 --rc genhtml_legend=1 
00:55:28.684 --rc geninfo_all_blocks=1 00:55:28.684 --rc geninfo_unexecuted_blocks=1 00:55:28.684 00:55:28.684 ' 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:55:28.684 05:50:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:55:28.684 05:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:55:31.222 05:50:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:55:31.222 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:55:31.222 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
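
gather_supported_nvmf_pci_devs, traced above, walks a whitelist of Intel E810/X722 and Mellanox device IDs, keeps the two E810 ports it found (0000:0a:00.0 and 0000:0a:00.1, device 0x159b bound to ice), and then resolves each PCI function to its kernel netdev through sysfs. A rough standalone sketch of that last mapping step (this mirrors the trace, it is not the common.sh function itself):

    shopt -s nullglob
    pci_devs=(0000:0a:00.0 0000:0a:00.1)                    # the two E810 ports found above
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # netdevs registered by this function
        ((${#pci_net_devs[@]})) || continue                 # skip functions with no netdev
        pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the sysfs path, keep iface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

In this run the result is net_devs=(cvl_0_0 cvl_0_1), which is exactly what the "Found net devices under ..." lines above report.
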
00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:55:31.222 Found net devices under 0000:0a:00.0: cvl_0_0 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:55:31.222 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:55:31.223 Found net devices under 0000:0a:00.1: cvl_0_1 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:55:31.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:55:31.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:55:31.223 00:55:31.223 --- 10.0.0.2 ping statistics --- 00:55:31.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:31.223 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:55:31.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
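
nvmf_tcp_init, traced above, turns those two E810 ports into a self-contained target/initiator pair on a single host: the target-side port (cvl_0_0) is moved into a private network namespace and given 10.0.0.2/24, the initiator-side port (cvl_0_1) stays in the root namespace with 10.0.0.1/24, and TCP port 4420 is opened in the firewall. Pulled out of the trace and shown bare (the trace also flushes any stale addresses first):

    ip netns add cvl_0_0_ns_spdk                                # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # move the target-facing port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP on port 4420
    ping -c 1 10.0.0.2                                          # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # target -> initiator sanity check

Both pings succeed here (0.354 ms and 0.089 ms round trips), which is the green light for starting the target.
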
00:55:31.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:55:31.223 00:55:31.223 --- 10.0.0.1 ping statistics --- 00:55:31.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:31.223 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=779170 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 779170 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 779170 ']' 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
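
nvmfappstart then launches the target inside that namespace. The command line below is taken verbatim from the trace; the waiting loop is only a minimal sketch of what waitforlisten does (poll until the JSON-RPC socket answers), not the actual helper from autotest_common.sh, and scripts/rpc.py stands in for the test's rpc_cmd wrapper:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
    nvmfpid=$!

    # sketch of waitforlisten: block until the app is up and serving RPCs on /var/tmp/spdk.sock
    while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || exit 1        # bail out if the target died during startup
        sleep 0.5
    done

-m 0x1E gives the target cores 1 through 4, and --interrupt-mode is what drives the reactor and spdk_thread interrupt-mode notices that follow.
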
00:55:31.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:55:31.223 05:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:55:31.223 [2024-12-09 05:50:25.039603] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:55:31.223 [2024-12-09 05:50:25.040643] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:55:31.223 [2024-12-09 05:50:25.040708] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:55:31.223 [2024-12-09 05:50:25.114424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:55:31.223 [2024-12-09 05:50:25.172711] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:55:31.223 [2024-12-09 05:50:25.172765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:55:31.223 [2024-12-09 05:50:25.172788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:55:31.223 [2024-12-09 05:50:25.172799] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:55:31.223 [2024-12-09 05:50:25.172809] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:55:31.223 [2024-12-09 05:50:25.174325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:55:31.223 [2024-12-09 05:50:25.174394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:55:31.223 [2024-12-09 05:50:25.174461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:55:31.223 [2024-12-09 05:50:25.174465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:55:31.223 [2024-12-09 05:50:25.267762] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:55:31.223 [2024-12-09 05:50:25.267966] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:55:31.223 [2024-12-09 05:50:25.268267] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:55:31.223 [2024-12-09 05:50:25.268911] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:55:31.223 [2024-12-09 05:50:25.269120] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
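
With the target up, starttarget creates the tcp transport explicitly (the rpc_cmd nvmf_create_transport -t tcp -o -u 8192 at host_management.sh@18 just below) and then builds an rpcs.txt batch (the cat at host_management.sh@23) and plays it against /var/tmp/spdk.sock. The batch itself is not echoed into this log; the sketch below is only a plausible reconstruction of the kind of configuration that would produce what the following entries do show, namely a 64 MiB / 512 B "Malloc0" bdev (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE from above), a cnode0 subsystem with host0 allowed, and a listener on 10.0.0.2:4420:

    # Hypothetical reconstruction; the real rpcs.txt content is not visible in this trace.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
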
00:55:31.223 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:55:31.223 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:55:31.223 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:55:31.223 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:55:31.223 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:55:31.223 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:55:31.223 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:55:31.223 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:55:31.224 [2024-12-09 05:50:25.319161] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:55:31.224 Malloc0 00:55:31.224 [2024-12-09 05:50:25.395374] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=779219 00:55:31.224 05:50:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 779219 /var/tmp/bdevperf.sock 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 779219 ']' 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:55:31.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:55:31.224 { 00:55:31.224 "params": { 00:55:31.224 "name": "Nvme$subsystem", 00:55:31.224 "trtype": "$TEST_TRANSPORT", 00:55:31.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:55:31.224 "adrfam": "ipv4", 00:55:31.224 "trsvcid": "$NVMF_PORT", 00:55:31.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:55:31.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:55:31.224 "hdgst": ${hdgst:-false}, 00:55:31.224 "ddgst": ${ddgst:-false} 00:55:31.224 }, 00:55:31.224 "method": "bdev_nvme_attach_controller" 00:55:31.224 } 00:55:31.224 EOF 00:55:31.224 )") 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
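
The initiator side is a bdevperf instance fed its bdev configuration on an anonymous file descriptor. The command line and the per-subsystem "params" template are both visible in the trace; the outer subsystems/bdev wrapper below is an assumption about what gen_nvmf_target_json produces, while the filled-in params block is printed verbatim in the entries that follow:

    # assumed shape of the generated config (inner params block matches the trace)
    config='{
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [ {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        } ]
      } ]
    }'

    # bdevperf run from the trace: queue depth 64, 64 KiB I/O, verify workload, 10 seconds;
    # the /dev/fd/63 seen above is presumably this kind of process substitution
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json <(printf '%s\n' "$config") \
        -q 64 -o 65536 -w verify -t 10
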
00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:55:31.224 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:55:31.224 "params": { 00:55:31.224 "name": "Nvme0", 00:55:31.224 "trtype": "tcp", 00:55:31.224 "traddr": "10.0.0.2", 00:55:31.224 "adrfam": "ipv4", 00:55:31.224 "trsvcid": "4420", 00:55:31.224 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:55:31.224 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:55:31.224 "hdgst": false, 00:55:31.224 "ddgst": false 00:55:31.224 }, 00:55:31.224 "method": "bdev_nvme_attach_controller" 00:55:31.224 }' 00:55:31.480 [2024-12-09 05:50:25.479483] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:55:31.480 [2024-12-09 05:50:25.479573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779219 ] 00:55:31.480 [2024-12-09 05:50:25.549808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:31.480 [2024-12-09 05:50:25.610169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:55:31.737 Running I/O for 10 seconds... 00:55:31.737 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:55:31.737 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:55:31.737 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:55:31.737 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:31.737 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:55:31.737 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:31.737 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:55:31.737 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:55:31.738 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:55:31.738 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:55:31.738 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:55:31.738 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:55:31.738 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:55:31.738 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:55:31.738 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:55:31.738 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:55:31.738 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:31.738 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:55:31.738 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:31.738 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:55:31.738 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:55:31.738 05:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:55:31.996 05:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:55:31.996 05:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:55:31.996 05:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:55:31.996 05:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:55:31.996 05:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:31.996 05:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:55:31.996 05:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:31.996 05:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=477 00:55:31.996 05:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 477 -ge 100 ']' 00:55:31.996 05:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:55:31.996 05:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:55:31.996 05:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:55:31.996 05:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:55:31.996 05:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:31.996 05:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:55:31.996 [2024-12-09 05:50:26.195187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.996 [2024-12-09 05:50:26.195259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.996 [2024-12-09 05:50:26.195291] 
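
While bdevperf runs its 10-second verify job, waitforio (host_management.sh@54-62, traced above) polls the initiator's RPC socket until the Nvme0n1 bdev has completed at least 100 reads, retrying up to ten times with a 0.25 s pause; in this run the count goes from 67 on the first poll to 477 on the second. A condensed sketch of that loop (scripts/rpc.py again standing in for rpc_cmd):

    i=10 ret=1
    while ((i != 0)); do
        # ask bdevperf (not the target) how many reads have completed so far
        read_io_count=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                        | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
        ((i--))
    done
    ((ret == 0)) || exit 1    # I/O never started flowing

Only once this succeeds does the test move on to the host-management step below.
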
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.996 [2024-12-09 05:50:26.195306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.996 [2024-12-09 05:50:26.195319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.996 [2024-12-09 05:50:26.195331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.996 [2024-12-09 05:50:26.195350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.996 [2024-12-09 05:50:26.195363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.996 [2024-12-09 05:50:26.195375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.996 [2024-12-09 05:50:26.195387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.996 [2024-12-09 05:50:26.195399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.996 [2024-12-09 05:50:26.195411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.996 [2024-12-09 05:50:26.195422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.996 [2024-12-09 05:50:26.195444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the 
state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 [2024-12-09 05:50:26.195856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb3a0 is same with the state(6) to be set 00:55:31.997 05:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:31.997 05:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:55:31.997 05:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:31.997 05:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:55:31.997 [2024-12-09 05:50:26.205698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:55:31.997 [2024-12-09 05:50:26.205750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.997 [2024-12-09 05:50:26.205768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:55:31.997 [2024-12-09 05:50:26.205782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.997 [2024-12-09 05:50:26.205796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:55:31.997 [2024-12-09 05:50:26.205811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.997 [2024-12-09 05:50:26.205824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:55:31.997 [2024-12-09 05:50:26.205838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.997 [2024-12-09 05:50:26.205851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d82a50 is same with the state(6) to be set 00:55:31.997 05:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:31.997 05:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:55:31.997 [2024-12-09 05:50:26.216864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d82a50 (9): Bad file descriptor 00:55:31.997 [2024-12-09 05:50:26.216972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.997 [2024-12-09 05:50:26.216995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.997 [2024-12-09 05:50:26.217021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.997 
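
The wall of tqpair recv-state and ABORTED messages above and below is the host-management check itself, not a failure: at host_management.sh@84 the test strips nqn.2016-06.io.spdk:host0 from cnode0's allowed hosts while bdevperf is mid-workload, the target tears the connection down (every outstanding read and write completes as ABORTED - SQ DELETION and the initiator sees the Bad file descriptor flush), and at @85 the host is added back, followed by the one-second grace period at @87, so the initiator can re-establish the connection and the verify job can finish. The two RPCs, with scripts/rpc.py standing in for the test's rpc_cmd wrapper:

    # pull the live connection out from under bdevperf ...           (host_management.sh@84)
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # ... then allow it back in so the workload can resume           (host_management.sh@85)
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1                                                          # @87: give the initiator time to reconnect
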
[2024-12-09 05:50:26.217043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.997 [2024-12-09 05:50:26.217061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.997 [2024-12-09 05:50:26.217075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.997 [2024-12-09 05:50:26.217090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.997 [2024-12-09 05:50:26.217104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.997 [2024-12-09 05:50:26.217119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.997 [2024-12-09 05:50:26.217133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.997 [2024-12-09 05:50:26.217148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.997 [2024-12-09 05:50:26.217162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.997 [2024-12-09 05:50:26.217177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.997 [2024-12-09 05:50:26.217191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.997 [2024-12-09 05:50:26.217205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.997 [2024-12-09 05:50:26.217219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.997 [2024-12-09 05:50:26.217234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.997 [2024-12-09 05:50:26.217262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.997 [2024-12-09 05:50:26.217289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.997 [2024-12-09 05:50:26.217305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.997 [2024-12-09 05:50:26.217320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.997 [2024-12-09 05:50:26.217337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.997 [2024-12-09 05:50:26.217352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.997 [2024-12-09 
05:50:26.217366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.997 [2024-12-09 05:50:26.217381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.997 [2024-12-09 05:50:26.217395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.997 [2024-12-09 05:50:26.217409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.997 [2024-12-09 05:50:26.217424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.997 [2024-12-09 05:50:26.217444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.997 [2024-12-09 05:50:26.217458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.997 [2024-12-09 05:50:26.217473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.997 [2024-12-09 05:50:26.217487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.217503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.217517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.217533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.217547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.217562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.217578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.217594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.217607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.217623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.217636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.217651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 
05:50:26.217665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.217681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.217696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.217712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.217725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.217741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.217754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.217769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.217783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.217798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.217815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.217832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.217845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.217861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.217874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.217889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.217903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.217918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.217931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.217946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 
05:50:26.217959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.217974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.217987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.218002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.218016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.218031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.218044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.218059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.218073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.218089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.218103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.218118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.218131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.218146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.218160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.218178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.218193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.218208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.218221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.218236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 
05:50:26.218250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.218265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.218286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.218302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.218319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.218334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.218348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.218362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.218376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.218391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.218404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.218419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.218433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.218447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.218461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.218476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.218489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.218505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.218518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.218533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 
05:50:26.218550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.218586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.218600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.218614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.218627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.218642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.998 [2024-12-09 05:50:26.218655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.998 [2024-12-09 05:50:26.218670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.999 [2024-12-09 05:50:26.218683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.999 [2024-12-09 05:50:26.218698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.999 [2024-12-09 05:50:26.218711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.999 [2024-12-09 05:50:26.218725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.999 [2024-12-09 05:50:26.218738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.999 [2024-12-09 05:50:26.218752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.999 [2024-12-09 05:50:26.218765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.999 [2024-12-09 05:50:26.218779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.999 [2024-12-09 05:50:26.218791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.999 [2024-12-09 05:50:26.218806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.999 [2024-12-09 05:50:26.218819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.999 [2024-12-09 05:50:26.218834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.999 [2024-12-09 
05:50:26.218846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.999 [2024-12-09 05:50:26.218861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.999 [2024-12-09 05:50:26.218873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:31.999 [2024-12-09 05:50:26.218888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:31.999 [2024-12-09 05:50:26.218901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:32.256 [2024-12-09 05:50:26.220101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:55:32.256 task offset: 73600 on job bdev=Nvme0n1 fails 00:55:32.256 00:55:32.256 Latency(us) 00:55:32.256 [2024-12-09T04:50:26.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:32.256 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:55:32.256 Job: Nvme0n1 ended in about 0.43 seconds with error 00:55:32.256 Verification LBA range: start 0x0 length 0x400 00:55:32.256 Nvme0n1 : 0.43 1338.61 83.66 148.99 0.00 41792.52 2463.67 34952.53 00:55:32.256 [2024-12-09T04:50:26.481Z] =================================================================================================================== 00:55:32.256 [2024-12-09T04:50:26.481Z] Total : 1338.61 83.66 148.99 0.00 41792.52 2463.67 34952.53 00:55:32.256 [2024-12-09 05:50:26.221987] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:55:32.256 [2024-12-09 05:50:26.266748] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:55:33.295 05:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 779219 00:55:33.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (779219) - No such process 00:55:33.295 05:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:55:33.295 05:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:55:33.295 05:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:55:33.295 05:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:55:33.295 05:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:55:33.295 05:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:55:33.295 05:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:55:33.295 05:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:55:33.295 { 00:55:33.295 "params": { 00:55:33.295 "name": "Nvme$subsystem", 00:55:33.295 "trtype": "$TEST_TRANSPORT", 00:55:33.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:55:33.295 "adrfam": "ipv4", 00:55:33.295 "trsvcid": "$NVMF_PORT", 00:55:33.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:55:33.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:55:33.295 "hdgst": ${hdgst:-false}, 00:55:33.295 "ddgst": ${ddgst:-false} 00:55:33.295 }, 00:55:33.295 "method": "bdev_nvme_attach_controller" 00:55:33.295 } 00:55:33.295 EOF 00:55:33.295 )") 00:55:33.295 05:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:55:33.295 05:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:55:33.295 05:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:55:33.295 05:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:55:33.295 "params": { 00:55:33.295 "name": "Nvme0", 00:55:33.295 "trtype": "tcp", 00:55:33.295 "traddr": "10.0.0.2", 00:55:33.295 "adrfam": "ipv4", 00:55:33.295 "trsvcid": "4420", 00:55:33.295 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:55:33.295 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:55:33.295 "hdgst": false, 00:55:33.295 "ddgst": false 00:55:33.295 }, 00:55:33.295 "method": "bdev_nvme_attach_controller" 00:55:33.295 }' 00:55:33.295 [2024-12-09 05:50:27.259794] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:55:33.295 [2024-12-09 05:50:27.259878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779500 ] 00:55:33.295 [2024-12-09 05:50:27.328202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:33.295 [2024-12-09 05:50:27.389854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:55:33.580 Running I/O for 1 seconds... 00:55:34.509 1664.00 IOPS, 104.00 MiB/s 00:55:34.509 Latency(us) 00:55:34.509 [2024-12-09T04:50:28.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:34.509 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:55:34.509 Verification LBA range: start 0x0 length 0x400 00:55:34.509 Nvme0n1 : 1.02 1686.30 105.39 0.00 0.00 37339.06 4441.88 33399.09 00:55:34.509 [2024-12-09T04:50:28.734Z] =================================================================================================================== 00:55:34.509 [2024-12-09T04:50:28.734Z] Total : 1686.30 105.39 0.00 0.00 37339.06 4441.88 33399.09 00:55:34.766 05:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:55:34.766 05:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:55:34.766 05:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:55:34.766 05:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:55:34.766 05:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:55:34.766 05:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:55:34.766 05:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:55:34.766 05:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:55:34.766 05:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:55:34.766 05:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:55:34.766 05:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:55:34.766 rmmod nvme_tcp 00:55:34.766 rmmod nvme_fabrics 00:55:34.766 rmmod nvme_keyring 00:55:34.766 05:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:55:34.766 05:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:55:34.766 05:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:55:34.766 05:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 779170 ']' 00:55:34.766 05:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 779170 00:55:34.766 05:50:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 779170 ']' 00:55:34.766 05:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 779170 00:55:34.766 05:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:55:34.766 05:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:55:34.766 05:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 779170 00:55:35.023 05:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:55:35.023 05:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:55:35.023 05:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 779170' 00:55:35.023 killing process with pid 779170 00:55:35.023 05:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 779170 00:55:35.023 05:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 779170 00:55:35.282 [2024-12-09 05:50:29.249121] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:55:35.282 05:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:55:35.282 05:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:55:35.282 05:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:55:35.282 05:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:55:35.282 05:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:55:35.282 05:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:55:35.282 05:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:55:35.282 05:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:55:35.282 05:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:55:35.282 05:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:35.282 05:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:35.282 05:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:37.186 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:55:37.186 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:55:37.186 00:55:37.186 real 0m8.639s 00:55:37.186 user 0m17.335s 
00:55:37.186 sys 0m3.540s 00:55:37.186 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:55:37.186 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:55:37.186 ************************************ 00:55:37.186 END TEST nvmf_host_management 00:55:37.186 ************************************ 00:55:37.186 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:55:37.186 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:55:37.186 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:55:37.186 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:55:37.186 ************************************ 00:55:37.186 START TEST nvmf_lvol 00:55:37.186 ************************************ 00:55:37.186 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:55:37.445 * Looking for test storage... 00:55:37.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:55:37.445 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:55:37.445 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:55:37.445 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:55:37.445 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:55:37.445 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:55:37.445 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:55:37.445 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:55:37.445 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:55:37.445 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:55:37.446 05:50:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:55:37.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:37.446 --rc genhtml_branch_coverage=1 00:55:37.446 --rc genhtml_function_coverage=1 00:55:37.446 --rc genhtml_legend=1 00:55:37.446 --rc geninfo_all_blocks=1 00:55:37.446 --rc geninfo_unexecuted_blocks=1 00:55:37.446 00:55:37.446 ' 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:55:37.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:37.446 --rc genhtml_branch_coverage=1 00:55:37.446 --rc genhtml_function_coverage=1 00:55:37.446 --rc genhtml_legend=1 00:55:37.446 --rc geninfo_all_blocks=1 00:55:37.446 --rc geninfo_unexecuted_blocks=1 00:55:37.446 00:55:37.446 ' 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:55:37.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:37.446 --rc genhtml_branch_coverage=1 00:55:37.446 --rc genhtml_function_coverage=1 00:55:37.446 --rc genhtml_legend=1 00:55:37.446 --rc geninfo_all_blocks=1 00:55:37.446 --rc geninfo_unexecuted_blocks=1 00:55:37.446 00:55:37.446 ' 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:55:37.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:37.446 --rc genhtml_branch_coverage=1 00:55:37.446 --rc genhtml_function_coverage=1 00:55:37.446 --rc 
genhtml_legend=1 00:55:37.446 --rc geninfo_all_blocks=1 00:55:37.446 --rc geninfo_unexecuted_blocks=1 00:55:37.446 00:55:37.446 ' 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:55:37.446 05:50:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:55:37.446 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:55:37.447 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:55:37.447 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:55:37.447 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:55:37.447 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:55:37.447 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:55:37.447 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:55:37.447 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:37.447 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:37.447 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:37.447 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:55:37.447 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:55:37.447 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:55:37.447 05:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:55:39.977 05:50:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:55:39.977 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:55:39.977 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:55:39.977 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:55:39.978 Found net devices under 0000:0a:00.0: cvl_0_0 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:55:39.978 Found net devices under 0000:0a:00.1: cvl_0_1 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:55:39.978 
05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:55:39.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:55:39.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:55:39.978 00:55:39.978 --- 10.0.0.2 ping statistics --- 00:55:39.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:39.978 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:55:39.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:55:39.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:55:39.978 00:55:39.978 --- 10.0.0.1 ping statistics --- 00:55:39.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:39.978 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=781706 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 781706 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 781706 ']' 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:55:39.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:55:39.978 05:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:55:39.978 [2024-12-09 05:50:33.905857] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
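Editor's note: the trace above wires the point-to-point TCP test topology out of the two ports of one E810 NIC: cvl_0_0 is moved into a dedicated network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule opening port 4420 and a ping in each direction as a sanity check. A minimal stand-alone sketch of the same steps follows; interface names and addresses are the ones from this run and would differ on other hosts.

  # Sketch of the namespace wiring performed by nvmf_tcp_init (names/IPs from this run).
  TARGET_NS=cvl_0_0_ns_spdk
  TGT_IF=cvl_0_0          # target-side port, moved into the namespace
  INI_IF=cvl_0_1          # initiator-side port, stays in the root namespace

  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"
  ip netns add "$TARGET_NS"
  ip link set "$TGT_IF" netns "$TARGET_NS"

  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$TARGET_NS" ip link set "$TGT_IF" up
  ip netns exec "$TARGET_NS" ip link set lo up

  # Open the NVMe/TCP port on the initiator side and verify both directions.
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1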
00:55:39.978 [2024-12-09 05:50:33.906898] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:55:39.978 [2024-12-09 05:50:33.906953] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:55:39.978 [2024-12-09 05:50:33.977700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:55:39.978 [2024-12-09 05:50:34.034105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:55:39.978 [2024-12-09 05:50:34.034154] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:55:39.978 [2024-12-09 05:50:34.034182] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:55:39.978 [2024-12-09 05:50:34.034194] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:55:39.978 [2024-12-09 05:50:34.034204] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:55:39.978 [2024-12-09 05:50:34.035544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:55:39.978 [2024-12-09 05:50:34.035668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:55:39.978 [2024-12-09 05:50:34.035672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:55:39.978 [2024-12-09 05:50:34.123209] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:55:39.978 [2024-12-09 05:50:34.123446] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:55:39.978 [2024-12-09 05:50:34.123462] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:55:39.978 [2024-12-09 05:50:34.123714] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
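Editor's note: nvmfappstart then launches the target application inside that namespace with interrupt mode enabled and a three-core mask (0x7), and waitforlisten blocks until the RPC socket answers before the test proceeds. A rough equivalent is sketched below; the binary path is this workspace's, and the polling loop uses the generic rpc_get_methods call as a liveness probe, which is an assumption about how one might wait rather than a copy of waitforlisten itself.

  SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Start the target in the namespace: interrupt mode, all trace groups, cores 0-2.
  ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
  nvmfpid=$!

  # Poll the default RPC socket until the app responds (simplified stand-in for waitforlisten).
  until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
      sleep 0.5
  done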
00:55:39.978 05:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:55:39.978 05:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:55:39.978 05:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:55:39.978 05:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:55:39.978 05:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:55:39.979 05:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:55:39.979 05:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:55:40.236 [2024-12-09 05:50:34.412335] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:55:40.236 05:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:55:40.803 05:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:55:40.803 05:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:55:40.803 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:55:40.803 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:55:41.369 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:55:41.627 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8b600ef9-aaf2-4b7f-9588-c9ccf25d6222 00:55:41.627 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8b600ef9-aaf2-4b7f-9588-c9ccf25d6222 lvol 20 00:55:41.885 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=cd776db2-0b2a-443e-b09b-4f363e0187d1 00:55:41.885 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:55:42.143 05:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cd776db2-0b2a-443e-b09b-4f363e0187d1 00:55:42.400 05:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:55:42.658 [2024-12-09 05:50:36.660525] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:55:42.658 05:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:55:42.915 05:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=782123 00:55:42.915 05:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:55:42.915 05:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:55:43.846 05:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot cd776db2-0b2a-443e-b09b-4f363e0187d1 MY_SNAPSHOT 00:55:44.104 05:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f28ee0f9-4134-4fe6-8168-b219ef356fc0 00:55:44.104 05:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize cd776db2-0b2a-443e-b09b-4f363e0187d1 30 00:55:44.668 05:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f28ee0f9-4134-4fe6-8168-b219ef356fc0 MY_CLONE 00:55:44.926 05:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d1dc53f2-067c-4acd-8082-fb98792b2824 00:55:44.926 05:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d1dc53f2-067c-4acd-8082-fb98792b2824 00:55:45.491 05:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 782123 00:55:53.599 Initializing NVMe Controllers 00:55:53.599 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:55:53.599 Controller IO queue size 128, less than required. 00:55:53.599 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:55:53.599 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:55:53.599 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:55:53.599 Initialization complete. Launching workers. 
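Editor's note: stripped of the xtrace prefixes, the nvmf_lvol flow traced above is: create the TCP transport, build a RAID-0 over two 64 MiB malloc bdevs, carve a logical volume store and an lvol out of it, export the lvol over NVMe/TCP as cnode0, and then take a snapshot, resize, clone, and inflate while spdk_nvme_perf drives random writes against the namespace. The consolidated sketch below repeats the commands exactly as they appear in the trace (the size arguments are passed through unchanged); the latency table that follows reports both perf cores completing against the inflated volume.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin

  # Provision the backing stack and export it over NVMe/TCP (arguments as in the trace).
  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  "$RPC" bdev_malloc_create 64 512                          # Malloc0
  "$RPC" bdev_malloc_create 64 512                          # Malloc1
  "$RPC" bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$("$RPC" bdev_lvol_create_lvstore raid0 lvs)
  lvol=$("$RPC" bdev_lvol_create -u "$lvs" lvol 20)         # size argument as passed by the test
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Drive random writes from two cores while mutating the lvol underneath the I/O.
  "$SPDK_BIN/spdk_nvme_perf" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
  perf_pid=$!

  snap=$("$RPC" bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  "$RPC" bdev_lvol_resize "$lvol" 30
  clone=$("$RPC" bdev_lvol_clone "$snap" MY_CLONE)
  "$RPC" bdev_lvol_inflate "$clone"
  wait "$perf_pid"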
00:55:53.599 ======================================================== 00:55:53.599 Latency(us) 00:55:53.599 Device Information : IOPS MiB/s Average min max 00:55:53.599 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10263.80 40.09 12483.18 4662.51 60661.52 00:55:53.599 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10377.50 40.54 12344.18 5982.77 105235.18 00:55:53.599 ======================================================== 00:55:53.599 Total : 20641.30 80.63 12413.30 4662.51 105235.18 00:55:53.599 00:55:53.599 05:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:55:53.599 05:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cd776db2-0b2a-443e-b09b-4f363e0187d1 00:55:53.856 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8b600ef9-aaf2-4b7f-9588-c9ccf25d6222 00:55:54.114 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:55:54.114 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:55:54.114 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:55:54.114 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:55:54.114 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:55:54.114 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:55:54.114 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:55:54.114 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:55:54.115 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:55:54.115 rmmod nvme_tcp 00:55:54.372 rmmod nvme_fabrics 00:55:54.372 rmmod nvme_keyring 00:55:54.372 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:55:54.372 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:55:54.372 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:55:54.372 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 781706 ']' 00:55:54.372 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 781706 00:55:54.372 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 781706 ']' 00:55:54.372 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 781706 00:55:54.372 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:55:54.372 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:55:54.372 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 781706 00:55:54.372 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:55:54.372 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:55:54.372 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 781706' 00:55:54.372 killing process with pid 781706 00:55:54.372 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 781706 00:55:54.372 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 781706 00:55:54.630 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:55:54.630 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:55:54.630 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:55:54.630 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:55:54.630 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:55:54.630 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:55:54.630 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:55:54.630 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:55:54.630 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:55:54.630 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:54.630 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:54.630 05:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:55:57.160 00:55:57.160 real 0m19.410s 00:55:57.160 user 0m57.138s 00:55:57.160 sys 0m7.681s 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:55:57.160 ************************************ 00:55:57.160 END TEST nvmf_lvol 00:55:57.160 ************************************ 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:55:57.160 ************************************ 00:55:57.160 START TEST nvmf_lvs_grow 00:55:57.160 
************************************ 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:55:57.160 * Looking for test storage... 00:55:57.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:55:57.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:57.160 --rc genhtml_branch_coverage=1 00:55:57.160 --rc genhtml_function_coverage=1 00:55:57.160 --rc genhtml_legend=1 00:55:57.160 --rc geninfo_all_blocks=1 00:55:57.160 --rc geninfo_unexecuted_blocks=1 00:55:57.160 00:55:57.160 ' 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:55:57.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:57.160 --rc genhtml_branch_coverage=1 00:55:57.160 --rc genhtml_function_coverage=1 00:55:57.160 --rc genhtml_legend=1 00:55:57.160 --rc geninfo_all_blocks=1 00:55:57.160 --rc geninfo_unexecuted_blocks=1 00:55:57.160 00:55:57.160 ' 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:55:57.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:57.160 --rc genhtml_branch_coverage=1 00:55:57.160 --rc genhtml_function_coverage=1 00:55:57.160 --rc genhtml_legend=1 00:55:57.160 --rc geninfo_all_blocks=1 00:55:57.160 --rc geninfo_unexecuted_blocks=1 00:55:57.160 00:55:57.160 ' 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:55:57.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:57.160 --rc genhtml_branch_coverage=1 00:55:57.160 --rc genhtml_function_coverage=1 00:55:57.160 --rc genhtml_legend=1 00:55:57.160 --rc geninfo_all_blocks=1 00:55:57.160 --rc geninfo_unexecuted_blocks=1 00:55:57.160 00:55:57.160 ' 00:55:57.160 05:50:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:55:57.160 05:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:55:57.160 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
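Editor's note: a few records above, nvmf/common.sh also derives a stable host identity for initiator-side tools: nvme gen-hostnqn produces an NQN whose embedded UUID doubles as the host ID, and both are packed into the NVME_HOST array for later use with nvme-cli. A small illustration of that pattern is below; the commented connect line is an assumption about downstream use, since this particular test drives I/O with SPDK's own initiators rather than the kernel host.

  # Derive a host identity the way nvmf/common.sh does (exact extraction may differ).
  NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}           # trailing UUID reused as the host ID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

  # Hypothetical kernel-initiator use of the same identity (not part of this test run):
  # nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0 "${NVME_HOST[@]}"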
00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:55:57.161 05:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:55:59.059 05:50:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
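Editor's note: the arrays populated above are simply lists of PCI device IDs per NIC family (Intel E810 = 0x1592/0x159b, X722 = 0x37d2, plus several Mellanox parts); the loop traced next walks the matching PCI functions and reads each one's net/ directory in sysfs to learn the interface names it reports as "Found net devices under ...". A stand-alone sketch of that lookup, hardcoded to the E810 ID seen on this machine and not a copy of the common.sh cache logic:

  # Find net devices backed by Intel E810 functions (vendor 0x8086, device 0x159b).
  for pci in /sys/bus/pci/devices/*; do
      [[ $(cat "$pci/vendor") == 0x8086 ]] || continue
      [[ $(cat "$pci/device") == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] || continue          # skip functions with no bound net device
          echo "Found ${pci##*/}: ${net##*/}"
      done
  done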
00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:55:59.059 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:55:59.059 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:55:59.059 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:55:59.060 Found net devices under 0000:0a:00.0: cvl_0_0 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:55:59.060 Found net devices under 0000:0a:00.1: cvl_0_1 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:55:59.060 05:50:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:55:59.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:55:59.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:55:59.060 00:55:59.060 --- 10.0.0.2 ping statistics --- 00:55:59.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:59.060 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:55:59.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:55:59.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:55:59.060 00:55:59.060 --- 10.0.0.1 ping statistics --- 00:55:59.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:59.060 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:55:59.060 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:55:59.317 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:55:59.317 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:55:59.317 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:55:59.317 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:55:59.317 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=785400 00:55:59.317 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:55:59.317 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 785400 00:55:59.317 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 785400 ']' 00:55:59.317 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:59.317 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:55:59.317 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:55:59.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:55:59.317 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:55:59.317 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:55:59.317 [2024-12-09 05:50:53.351599] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
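Editor's note: the ACCEPT rule inserted just before the pings above goes through the ipts helper, which appends an '-m comment --comment SPDK_NVMF:...' tag to every rule it adds; teardown (the iptr call seen at the end of the previous test) then restores the firewall by filtering the tagged rules out of an iptables-save dump instead of tracking them individually. Roughly, using the commands visible in this log:

  # Insert a rule tagged so it can be found again later (as ipts does).
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  # Cleanup: drop every SPDK_NVMF-tagged rule in one pass (as iptr does).
  iptables-save | grep -v SPDK_NVMF | iptables-restore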
00:55:59.317 [2024-12-09 05:50:53.352730] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:55:59.317 [2024-12-09 05:50:53.352798] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:55:59.317 [2024-12-09 05:50:53.423673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:59.317 [2024-12-09 05:50:53.480427] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:55:59.317 [2024-12-09 05:50:53.480478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:55:59.317 [2024-12-09 05:50:53.480493] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:55:59.317 [2024-12-09 05:50:53.480506] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:55:59.317 [2024-12-09 05:50:53.480516] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:55:59.317 [2024-12-09 05:50:53.481123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:55:59.574 [2024-12-09 05:50:53.569229] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:55:59.574 [2024-12-09 05:50:53.569549] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:55:59.574 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:55:59.574 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:55:59.574 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:55:59.574 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:55:59.574 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:55:59.574 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:55:59.574 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:55:59.832 [2024-12-09 05:50:53.873698] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:55:59.832 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:55:59.832 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:55:59.832 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:55:59.832 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:55:59.832 ************************************ 00:55:59.832 START TEST lvs_grow_clean 00:55:59.832 ************************************ 00:55:59.832 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:55:59.832 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:55:59.832 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:55:59.832 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:55:59.832 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:55:59.832 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:55:59.832 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:55:59.832 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:55:59.832 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:55:59.832 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:56:00.089 05:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:56:00.089 05:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:56:00.346 05:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a02aaf34-b37c-4f61-8109-99582e7eb030 00:56:00.346 05:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a02aaf34-b37c-4f61-8109-99582e7eb030 00:56:00.346 05:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:56:00.603 05:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:56:00.603 05:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:56:00.603 05:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a02aaf34-b37c-4f61-8109-99582e7eb030 lvol 150 00:56:00.861 05:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=31aca526-e2d0-4917-894a-398db0254678 00:56:00.861 05:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:56:00.861 05:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:56:01.117 [2024-12-09 05:50:55.309615] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:56:01.117 [2024-12-09 05:50:55.309737] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:56:01.117 true 00:56:01.117 05:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a02aaf34-b37c-4f61-8109-99582e7eb030 00:56:01.117 05:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:56:01.681 05:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:56:01.681 05:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:56:01.681 05:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 31aca526-e2d0-4917-894a-398db0254678 00:56:02.247 05:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:56:02.247 [2024-12-09 05:50:56.429917] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:56:02.248 05:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:56:02.506 05:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=785836 00:56:02.506 05:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:56:02.506 05:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:56:02.506 05:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 785836 /var/tmp/bdevperf.sock 00:56:02.506 05:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 785836 ']' 00:56:02.506 05:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:56:02.506 05:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:56:02.506 05:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:56:02.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:56:02.506 05:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:56:02.506 05:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:56:02.764 [2024-12-09 05:50:56.766947] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:56:02.764 [2024-12-09 05:50:56.767053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785836 ] 00:56:02.764 [2024-12-09 05:50:56.836438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:56:02.764 [2024-12-09 05:50:56.898580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:56:03.021 05:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:56:03.021 05:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:56:03.021 05:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:56:03.279 Nvme0n1 00:56:03.279 05:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:56:03.535 [ 00:56:03.535 { 00:56:03.535 "name": "Nvme0n1", 00:56:03.535 "aliases": [ 00:56:03.535 "31aca526-e2d0-4917-894a-398db0254678" 00:56:03.535 ], 00:56:03.535 "product_name": "NVMe disk", 00:56:03.535 "block_size": 4096, 00:56:03.535 "num_blocks": 38912, 00:56:03.535 "uuid": "31aca526-e2d0-4917-894a-398db0254678", 00:56:03.535 "numa_id": 0, 00:56:03.535 "assigned_rate_limits": { 00:56:03.535 "rw_ios_per_sec": 0, 00:56:03.535 "rw_mbytes_per_sec": 0, 00:56:03.535 "r_mbytes_per_sec": 0, 00:56:03.535 "w_mbytes_per_sec": 0 00:56:03.535 }, 00:56:03.535 "claimed": false, 00:56:03.535 "zoned": false, 00:56:03.535 "supported_io_types": { 00:56:03.535 "read": true, 00:56:03.535 "write": true, 00:56:03.535 "unmap": true, 00:56:03.535 "flush": true, 00:56:03.535 "reset": true, 00:56:03.535 "nvme_admin": true, 00:56:03.535 "nvme_io": true, 00:56:03.535 "nvme_io_md": false, 00:56:03.535 "write_zeroes": true, 00:56:03.535 "zcopy": false, 00:56:03.535 "get_zone_info": false, 00:56:03.535 "zone_management": false, 00:56:03.535 "zone_append": false, 00:56:03.535 "compare": true, 00:56:03.535 "compare_and_write": true, 00:56:03.535 "abort": true, 00:56:03.535 "seek_hole": false, 00:56:03.535 "seek_data": false, 00:56:03.536 "copy": true, 
00:56:03.536 "nvme_iov_md": false 00:56:03.536 }, 00:56:03.536 "memory_domains": [ 00:56:03.536 { 00:56:03.536 "dma_device_id": "system", 00:56:03.536 "dma_device_type": 1 00:56:03.536 } 00:56:03.536 ], 00:56:03.536 "driver_specific": { 00:56:03.536 "nvme": [ 00:56:03.536 { 00:56:03.536 "trid": { 00:56:03.536 "trtype": "TCP", 00:56:03.536 "adrfam": "IPv4", 00:56:03.536 "traddr": "10.0.0.2", 00:56:03.536 "trsvcid": "4420", 00:56:03.536 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:56:03.536 }, 00:56:03.536 "ctrlr_data": { 00:56:03.536 "cntlid": 1, 00:56:03.536 "vendor_id": "0x8086", 00:56:03.536 "model_number": "SPDK bdev Controller", 00:56:03.536 "serial_number": "SPDK0", 00:56:03.536 "firmware_revision": "25.01", 00:56:03.536 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:56:03.536 "oacs": { 00:56:03.536 "security": 0, 00:56:03.536 "format": 0, 00:56:03.536 "firmware": 0, 00:56:03.536 "ns_manage": 0 00:56:03.536 }, 00:56:03.536 "multi_ctrlr": true, 00:56:03.536 "ana_reporting": false 00:56:03.536 }, 00:56:03.536 "vs": { 00:56:03.536 "nvme_version": "1.3" 00:56:03.536 }, 00:56:03.536 "ns_data": { 00:56:03.536 "id": 1, 00:56:03.536 "can_share": true 00:56:03.536 } 00:56:03.536 } 00:56:03.536 ], 00:56:03.536 "mp_policy": "active_passive" 00:56:03.536 } 00:56:03.536 } 00:56:03.536 ] 00:56:03.793 05:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=785973 00:56:03.793 05:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:56:03.793 05:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:56:03.793 Running I/O for 10 seconds... 
00:56:04.741 Latency(us) 00:56:04.741 [2024-12-09T04:50:58.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:56:04.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:56:04.741 Nvme0n1 : 1.00 14732.00 57.55 0.00 0.00 0.00 0.00 0.00 00:56:04.741 [2024-12-09T04:50:58.966Z] =================================================================================================================== 00:56:04.741 [2024-12-09T04:50:58.966Z] Total : 14732.00 57.55 0.00 0.00 0.00 0.00 0.00 00:56:04.741 00:56:05.675 05:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a02aaf34-b37c-4f61-8109-99582e7eb030 00:56:05.675 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:56:05.675 Nvme0n1 : 2.00 14922.50 58.29 0.00 0.00 0.00 0.00 0.00 00:56:05.675 [2024-12-09T04:50:59.900Z] =================================================================================================================== 00:56:05.675 [2024-12-09T04:50:59.900Z] Total : 14922.50 58.29 0.00 0.00 0.00 0.00 0.00 00:56:05.675 00:56:05.933 true 00:56:05.933 05:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a02aaf34-b37c-4f61-8109-99582e7eb030 00:56:05.933 05:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:56:06.192 05:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:56:06.192 05:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:56:06.192 05:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 785973 00:56:06.758 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:56:06.758 Nvme0n1 : 3.00 14922.67 58.29 0.00 0.00 0.00 0.00 0.00 00:56:06.758 [2024-12-09T04:51:00.983Z] =================================================================================================================== 00:56:06.758 [2024-12-09T04:51:00.983Z] Total : 14922.67 58.29 0.00 0.00 0.00 0.00 0.00 00:56:06.758 00:56:07.692 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:56:07.692 Nvme0n1 : 4.00 14931.00 58.32 0.00 0.00 0.00 0.00 0.00 00:56:07.692 [2024-12-09T04:51:01.917Z] =================================================================================================================== 00:56:07.692 [2024-12-09T04:51:01.917Z] Total : 14931.00 58.32 0.00 0.00 0.00 0.00 0.00 00:56:07.692 00:56:09.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:56:09.066 Nvme0n1 : 5.00 14992.80 58.57 0.00 0.00 0.00 0.00 0.00 00:56:09.067 [2024-12-09T04:51:03.292Z] =================================================================================================================== 00:56:09.067 [2024-12-09T04:51:03.292Z] Total : 14992.80 58.57 0.00 0.00 0.00 0.00 0.00 00:56:09.067 00:56:10.002 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:56:10.002 Nvme0n1 : 6.00 15055.17 58.81 0.00 0.00 0.00 0.00 0.00 00:56:10.002 [2024-12-09T04:51:04.227Z] 
=================================================================================================================== 00:56:10.002 [2024-12-09T04:51:04.227Z] Total : 15055.17 58.81 0.00 0.00 0.00 0.00 0.00 00:56:10.002 00:56:10.935 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:56:10.935 Nvme0n1 : 7.00 15099.71 58.98 0.00 0.00 0.00 0.00 0.00 00:56:10.935 [2024-12-09T04:51:05.160Z] =================================================================================================================== 00:56:10.935 [2024-12-09T04:51:05.161Z] Total : 15099.71 58.98 0.00 0.00 0.00 0.00 0.00 00:56:10.936 00:56:11.869 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:56:11.869 Nvme0n1 : 8.00 15149.00 59.18 0.00 0.00 0.00 0.00 0.00 00:56:11.869 [2024-12-09T04:51:06.094Z] =================================================================================================================== 00:56:11.869 [2024-12-09T04:51:06.094Z] Total : 15149.00 59.18 0.00 0.00 0.00 0.00 0.00 00:56:11.869 00:56:12.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:56:12.801 Nvme0n1 : 9.00 15187.33 59.33 0.00 0.00 0.00 0.00 0.00 00:56:12.801 [2024-12-09T04:51:07.026Z] =================================================================================================================== 00:56:12.801 [2024-12-09T04:51:07.026Z] Total : 15187.33 59.33 0.00 0.00 0.00 0.00 0.00 00:56:12.801 00:56:13.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:56:13.784 Nvme0n1 : 10.00 15205.30 59.40 0.00 0.00 0.00 0.00 0.00 00:56:13.784 [2024-12-09T04:51:08.009Z] =================================================================================================================== 00:56:13.784 [2024-12-09T04:51:08.009Z] Total : 15205.30 59.40 0.00 0.00 0.00 0.00 0.00 00:56:13.784 00:56:13.784 00:56:13.784 Latency(us) 00:56:13.784 [2024-12-09T04:51:08.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:56:13.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:56:13.784 Nvme0n1 : 10.00 15210.84 59.42 0.00 0.00 8410.69 4369.07 18447.17 00:56:13.784 [2024-12-09T04:51:08.009Z] =================================================================================================================== 00:56:13.784 [2024-12-09T04:51:08.009Z] Total : 15210.84 59.42 0.00 0.00 8410.69 4369.07 18447.17 00:56:13.784 { 00:56:13.784 "results": [ 00:56:13.784 { 00:56:13.784 "job": "Nvme0n1", 00:56:13.784 "core_mask": "0x2", 00:56:13.784 "workload": "randwrite", 00:56:13.784 "status": "finished", 00:56:13.784 "queue_depth": 128, 00:56:13.784 "io_size": 4096, 00:56:13.784 "runtime": 10.004776, 00:56:13.784 "iops": 15210.835305058305, 00:56:13.784 "mibps": 59.417325410384, 00:56:13.784 "io_failed": 0, 00:56:13.784 "io_timeout": 0, 00:56:13.784 "avg_latency_us": 8410.692899600306, 00:56:13.784 "min_latency_us": 4369.066666666667, 00:56:13.784 "max_latency_us": 18447.17037037037 00:56:13.784 } 00:56:13.784 ], 00:56:13.784 "core_count": 1 00:56:13.784 } 00:56:13.784 05:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 785836 00:56:13.784 05:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 785836 ']' 00:56:13.784 05:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 785836 
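The growth itself happens while the randwrite job is still running against Nvme0n1; both the grow call and the check that follows appear in the trace above, here reduced to the two RPCs involved (the $lvs UUID is a02aaf34-b37c-4f61-8109-99582e7eb030 in this run):

# grow the lvstore onto the already-resized AIO bdev, under active I/O
$rpc bdev_lvol_grow_lvstore -u $lvs

# total_data_clusters should now report 99 instead of the original 49
$rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'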
00:56:13.785 05:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:56:13.785 05:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:56:13.785 05:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 785836 00:56:13.785 05:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:56:13.785 05:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:56:13.785 05:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 785836' 00:56:13.785 killing process with pid 785836 00:56:13.785 05:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 785836 00:56:13.785 Received shutdown signal, test time was about 10.000000 seconds 00:56:13.785 00:56:13.785 Latency(us) 00:56:13.785 [2024-12-09T04:51:08.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:56:13.785 [2024-12-09T04:51:08.010Z] =================================================================================================================== 00:56:13.785 [2024-12-09T04:51:08.010Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:56:13.785 05:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 785836 00:56:14.074 05:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:56:14.336 05:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:56:14.594 05:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a02aaf34-b37c-4f61-8109-99582e7eb030 00:56:14.594 05:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:56:14.850 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:56:14.850 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:56:14.850 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:56:15.414 [2024-12-09 05:51:09.337657] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:56:15.414 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a02aaf34-b37c-4f61-8109-99582e7eb030 
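Clean-path teardown ends with a deliberate negative check: deleting the backing AIO bdev hot-removes the lvstore on top of it, so the final bdev_lvol_get_lvstores is expected to fail (the test wraps it in the NOT helper; the error response with code -19 / "No such device" follows in the trace). A minimal equivalent sketch:

$rpc nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].free_clusters'   # 61 clusters free after the run

$rpc bdev_aio_delete aio_bdev                  # hot-removes the lvstore as well
if $rpc bdev_lvol_get_lvstores -u $lvs 2>/dev/null; then
    echo "lvstore unexpectedly still present" >&2
    exit 1
fi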
00:56:15.414 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:56:15.414 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a02aaf34-b37c-4f61-8109-99582e7eb030 00:56:15.414 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:56:15.414 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:56:15.414 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:56:15.414 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:56:15.414 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:56:15.414 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:56:15.414 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:56:15.414 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:56:15.414 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a02aaf34-b37c-4f61-8109-99582e7eb030 00:56:15.414 request: 00:56:15.414 { 00:56:15.414 "uuid": "a02aaf34-b37c-4f61-8109-99582e7eb030", 00:56:15.414 "method": "bdev_lvol_get_lvstores", 00:56:15.414 "req_id": 1 00:56:15.414 } 00:56:15.414 Got JSON-RPC error response 00:56:15.414 response: 00:56:15.414 { 00:56:15.414 "code": -19, 00:56:15.414 "message": "No such device" 00:56:15.414 } 00:56:15.671 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:56:15.671 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:56:15.671 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:56:15.671 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:56:15.671 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:56:15.928 aio_bdev 00:56:15.928 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
31aca526-e2d0-4917-894a-398db0254678 00:56:15.928 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=31aca526-e2d0-4917-894a-398db0254678 00:56:15.928 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:56:15.928 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:56:15.928 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:56:15.928 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:56:15.928 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:56:16.185 05:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 31aca526-e2d0-4917-894a-398db0254678 -t 2000 00:56:16.443 [ 00:56:16.443 { 00:56:16.443 "name": "31aca526-e2d0-4917-894a-398db0254678", 00:56:16.443 "aliases": [ 00:56:16.443 "lvs/lvol" 00:56:16.443 ], 00:56:16.443 "product_name": "Logical Volume", 00:56:16.443 "block_size": 4096, 00:56:16.443 "num_blocks": 38912, 00:56:16.443 "uuid": "31aca526-e2d0-4917-894a-398db0254678", 00:56:16.443 "assigned_rate_limits": { 00:56:16.443 "rw_ios_per_sec": 0, 00:56:16.443 "rw_mbytes_per_sec": 0, 00:56:16.443 "r_mbytes_per_sec": 0, 00:56:16.443 "w_mbytes_per_sec": 0 00:56:16.443 }, 00:56:16.443 "claimed": false, 00:56:16.443 "zoned": false, 00:56:16.443 "supported_io_types": { 00:56:16.443 "read": true, 00:56:16.443 "write": true, 00:56:16.443 "unmap": true, 00:56:16.443 "flush": false, 00:56:16.443 "reset": true, 00:56:16.443 "nvme_admin": false, 00:56:16.443 "nvme_io": false, 00:56:16.443 "nvme_io_md": false, 00:56:16.443 "write_zeroes": true, 00:56:16.443 "zcopy": false, 00:56:16.443 "get_zone_info": false, 00:56:16.443 "zone_management": false, 00:56:16.443 "zone_append": false, 00:56:16.443 "compare": false, 00:56:16.443 "compare_and_write": false, 00:56:16.443 "abort": false, 00:56:16.443 "seek_hole": true, 00:56:16.443 "seek_data": true, 00:56:16.443 "copy": false, 00:56:16.443 "nvme_iov_md": false 00:56:16.443 }, 00:56:16.443 "driver_specific": { 00:56:16.443 "lvol": { 00:56:16.443 "lvol_store_uuid": "a02aaf34-b37c-4f61-8109-99582e7eb030", 00:56:16.443 "base_bdev": "aio_bdev", 00:56:16.443 "thin_provision": false, 00:56:16.443 "num_allocated_clusters": 38, 00:56:16.443 "snapshot": false, 00:56:16.443 "clone": false, 00:56:16.443 "esnap_clone": false 00:56:16.443 } 00:56:16.443 } 00:56:16.443 } 00:56:16.443 ] 00:56:16.443 05:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:56:16.443 05:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a02aaf34-b37c-4f61-8109-99582e7eb030 00:56:16.443 05:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:56:16.701 05:51:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:56:16.701 05:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a02aaf34-b37c-4f61-8109-99582e7eb030 00:56:16.701 05:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:56:16.958 05:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:56:16.958 05:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 31aca526-e2d0-4917-894a-398db0254678 00:56:17.216 05:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a02aaf34-b37c-4f61-8109-99582e7eb030 00:56:17.473 05:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:56:17.731 05:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:56:17.731 00:56:17.731 real 0m17.972s 00:56:17.732 user 0m17.540s 00:56:17.732 sys 0m1.861s 00:56:17.732 05:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:56:17.732 05:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:56:17.732 ************************************ 00:56:17.732 END TEST lvs_grow_clean 00:56:17.732 ************************************ 00:56:17.732 05:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:56:17.732 05:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:56:17.732 05:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:56:17.732 05:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:56:17.732 ************************************ 00:56:17.732 START TEST lvs_grow_dirty 00:56:17.732 ************************************ 00:56:17.732 05:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:56:17.732 05:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:56:17.732 05:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:56:17.732 05:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:56:17.732 05:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:56:17.732 05:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:56:17.732 05:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:56:17.732 05:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:56:17.732 05:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:56:17.732 05:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:56:18.300 05:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:56:18.300 05:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:56:18.300 05:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=1a5fc114-d49c-4d3c-bbf7-9f83ef75e266 00:56:18.300 05:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a5fc114-d49c-4d3c-bbf7-9f83ef75e266 00:56:18.300 05:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:56:18.558 05:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:56:18.558 05:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:56:18.558 05:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1a5fc114-d49c-4d3c-bbf7-9f83ef75e266 lvol 150 00:56:19.124 05:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=9547f614-7918-46da-942c-59c830660e12 00:56:19.124 05:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:56:19.124 05:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:56:19.124 [2024-12-09 05:51:13.309597] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:56:19.124 [2024-12-09 05:51:13.309703] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:56:19.124 true 00:56:19.124 05:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a5fc114-d49c-4d3c-bbf7-9f83ef75e266 00:56:19.124 05:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:56:19.382 05:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:56:19.382 05:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:56:19.640 05:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9547f614-7918-46da-942c-59c830660e12 00:56:19.898 05:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:56:20.156 [2024-12-09 05:51:14.365917] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:56:20.414 05:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:56:20.672 05:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=788613 00:56:20.672 05:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:56:20.672 05:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:56:20.672 05:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 788613 /var/tmp/bdevperf.sock 00:56:20.672 05:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 788613 ']' 00:56:20.672 05:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:56:20.672 05:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:56:20.672 05:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:56:20.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:56:20.672 05:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:56:20.672 05:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:56:20.672 [2024-12-09 05:51:14.691300] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:56:20.672 [2024-12-09 05:51:14.691409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid788613 ] 00:56:20.672 [2024-12-09 05:51:14.758622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:56:20.672 [2024-12-09 05:51:14.819497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:56:20.930 05:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:56:20.930 05:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:56:20.930 05:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:56:21.187 Nvme0n1 00:56:21.187 05:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:56:21.445 [ 00:56:21.445 { 00:56:21.445 "name": "Nvme0n1", 00:56:21.445 "aliases": [ 00:56:21.445 "9547f614-7918-46da-942c-59c830660e12" 00:56:21.445 ], 00:56:21.445 "product_name": "NVMe disk", 00:56:21.445 "block_size": 4096, 00:56:21.445 "num_blocks": 38912, 00:56:21.445 "uuid": "9547f614-7918-46da-942c-59c830660e12", 00:56:21.445 "numa_id": 0, 00:56:21.445 "assigned_rate_limits": { 00:56:21.445 "rw_ios_per_sec": 0, 00:56:21.445 "rw_mbytes_per_sec": 0, 00:56:21.445 "r_mbytes_per_sec": 0, 00:56:21.445 "w_mbytes_per_sec": 0 00:56:21.445 }, 00:56:21.445 "claimed": false, 00:56:21.445 "zoned": false, 00:56:21.445 "supported_io_types": { 00:56:21.445 "read": true, 00:56:21.445 "write": true, 00:56:21.445 "unmap": true, 00:56:21.445 "flush": true, 00:56:21.445 "reset": true, 00:56:21.445 "nvme_admin": true, 00:56:21.445 "nvme_io": true, 00:56:21.445 "nvme_io_md": false, 00:56:21.445 "write_zeroes": true, 00:56:21.445 "zcopy": false, 00:56:21.445 "get_zone_info": false, 00:56:21.445 "zone_management": false, 00:56:21.445 "zone_append": false, 00:56:21.445 "compare": true, 00:56:21.445 "compare_and_write": true, 00:56:21.445 "abort": true, 00:56:21.445 "seek_hole": false, 00:56:21.445 "seek_data": false, 00:56:21.445 "copy": true, 00:56:21.445 "nvme_iov_md": false 00:56:21.445 }, 00:56:21.445 "memory_domains": [ 00:56:21.445 { 00:56:21.445 "dma_device_id": "system", 00:56:21.445 "dma_device_type": 1 00:56:21.445 } 00:56:21.445 ], 00:56:21.445 "driver_specific": { 00:56:21.445 "nvme": [ 00:56:21.445 { 00:56:21.445 "trid": { 00:56:21.445 "trtype": "TCP", 00:56:21.445 "adrfam": "IPv4", 00:56:21.445 "traddr": "10.0.0.2", 00:56:21.445 "trsvcid": "4420", 00:56:21.445 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:56:21.445 }, 00:56:21.445 "ctrlr_data": { 
00:56:21.445 "cntlid": 1, 00:56:21.445 "vendor_id": "0x8086", 00:56:21.445 "model_number": "SPDK bdev Controller", 00:56:21.445 "serial_number": "SPDK0", 00:56:21.445 "firmware_revision": "25.01", 00:56:21.445 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:56:21.445 "oacs": { 00:56:21.445 "security": 0, 00:56:21.445 "format": 0, 00:56:21.445 "firmware": 0, 00:56:21.445 "ns_manage": 0 00:56:21.445 }, 00:56:21.445 "multi_ctrlr": true, 00:56:21.445 "ana_reporting": false 00:56:21.445 }, 00:56:21.445 "vs": { 00:56:21.445 "nvme_version": "1.3" 00:56:21.445 }, 00:56:21.445 "ns_data": { 00:56:21.445 "id": 1, 00:56:21.445 "can_share": true 00:56:21.445 } 00:56:21.445 } 00:56:21.445 ], 00:56:21.445 "mp_policy": "active_passive" 00:56:21.445 } 00:56:21.445 } 00:56:21.445 ] 00:56:21.445 05:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=788748 00:56:21.445 05:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:56:21.445 05:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:56:21.702 Running I/O for 10 seconds... 00:56:22.635 Latency(us) 00:56:22.635 [2024-12-09T04:51:16.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:56:22.635 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:56:22.635 Nvme0n1 : 1.00 14351.00 56.06 0.00 0.00 0.00 0.00 0.00 00:56:22.635 [2024-12-09T04:51:16.860Z] =================================================================================================================== 00:56:22.635 [2024-12-09T04:51:16.860Z] Total : 14351.00 56.06 0.00 0.00 0.00 0.00 0.00 00:56:22.635 00:56:23.565 05:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1a5fc114-d49c-4d3c-bbf7-9f83ef75e266 00:56:23.565 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:56:23.565 Nvme0n1 : 2.00 14414.50 56.31 0.00 0.00 0.00 0.00 0.00 00:56:23.565 [2024-12-09T04:51:17.790Z] =================================================================================================================== 00:56:23.565 [2024-12-09T04:51:17.790Z] Total : 14414.50 56.31 0.00 0.00 0.00 0.00 0.00 00:56:23.565 00:56:23.822 true 00:56:23.822 05:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a5fc114-d49c-4d3c-bbf7-9f83ef75e266 00:56:23.822 05:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:56:24.079 05:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:56:24.079 05:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:56:24.079 05:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 788748 00:56:24.643 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:56:24.643 Nvme0n1 : 3.00 
14520.33 56.72 0.00 0.00 0.00 0.00 0.00 00:56:24.643 [2024-12-09T04:51:18.868Z] =================================================================================================================== 00:56:24.643 [2024-12-09T04:51:18.868Z] Total : 14520.33 56.72 0.00 0.00 0.00 0.00 0.00 00:56:24.643 00:56:25.572 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:56:25.572 Nvme0n1 : 4.00 14573.25 56.93 0.00 0.00 0.00 0.00 0.00 00:56:25.572 [2024-12-09T04:51:19.797Z] =================================================================================================================== 00:56:25.572 [2024-12-09T04:51:19.797Z] Total : 14573.25 56.93 0.00 0.00 0.00 0.00 0.00 00:56:25.572 00:56:26.943 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:56:26.943 Nvme0n1 : 5.00 14605.00 57.05 0.00 0.00 0.00 0.00 0.00 00:56:26.943 [2024-12-09T04:51:21.168Z] =================================================================================================================== 00:56:26.943 [2024-12-09T04:51:21.168Z] Total : 14605.00 57.05 0.00 0.00 0.00 0.00 0.00 00:56:26.943 00:56:27.876 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:56:27.876 Nvme0n1 : 6.00 14668.50 57.30 0.00 0.00 0.00 0.00 0.00 00:56:27.876 [2024-12-09T04:51:22.101Z] =================================================================================================================== 00:56:27.876 [2024-12-09T04:51:22.101Z] Total : 14668.50 57.30 0.00 0.00 0.00 0.00 0.00 00:56:27.876 00:56:28.810 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:56:28.810 Nvme0n1 : 7.00 14718.71 57.49 0.00 0.00 0.00 0.00 0.00 00:56:28.810 [2024-12-09T04:51:23.035Z] =================================================================================================================== 00:56:28.810 [2024-12-09T04:51:23.035Z] Total : 14718.71 57.49 0.00 0.00 0.00 0.00 0.00 00:56:28.810 00:56:29.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:56:29.747 Nvme0n1 : 8.00 14768.00 57.69 0.00 0.00 0.00 0.00 0.00 00:56:29.747 [2024-12-09T04:51:23.972Z] =================================================================================================================== 00:56:29.747 [2024-12-09T04:51:23.972Z] Total : 14768.00 57.69 0.00 0.00 0.00 0.00 0.00 00:56:29.747 00:56:30.681 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:56:30.681 Nvme0n1 : 9.00 14806.33 57.84 0.00 0.00 0.00 0.00 0.00 00:56:30.681 [2024-12-09T04:51:24.906Z] =================================================================================================================== 00:56:30.681 [2024-12-09T04:51:24.906Z] Total : 14806.33 57.84 0.00 0.00 0.00 0.00 0.00 00:56:30.681 00:56:31.623 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:56:31.623 Nvme0n1 : 10.00 14818.00 57.88 0.00 0.00 0.00 0.00 0.00 00:56:31.623 [2024-12-09T04:51:25.848Z] =================================================================================================================== 00:56:31.623 [2024-12-09T04:51:25.848Z] Total : 14818.00 57.88 0.00 0.00 0.00 0.00 0.00 00:56:31.623 00:56:31.623 00:56:31.624 Latency(us) 00:56:31.624 [2024-12-09T04:51:25.849Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:56:31.624 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:56:31.624 Nvme0n1 : 10.01 14824.10 57.91 0.00 0.00 8629.07 4223.43 19029.71 00:56:31.624 
[2024-12-09T04:51:25.849Z] =================================================================================================================== 00:56:31.624 [2024-12-09T04:51:25.849Z] Total : 14824.10 57.91 0.00 0.00 8629.07 4223.43 19029.71 00:56:31.624 { 00:56:31.624 "results": [ 00:56:31.624 { 00:56:31.624 "job": "Nvme0n1", 00:56:31.624 "core_mask": "0x2", 00:56:31.624 "workload": "randwrite", 00:56:31.624 "status": "finished", 00:56:31.624 "queue_depth": 128, 00:56:31.624 "io_size": 4096, 00:56:31.624 "runtime": 10.008768, 00:56:31.624 "iops": 14824.10222716722, 00:56:31.624 "mibps": 57.90664932487195, 00:56:31.624 "io_failed": 0, 00:56:31.624 "io_timeout": 0, 00:56:31.624 "avg_latency_us": 8629.072060113573, 00:56:31.624 "min_latency_us": 4223.431111111111, 00:56:31.624 "max_latency_us": 19029.712592592594 00:56:31.624 } 00:56:31.624 ], 00:56:31.624 "core_count": 1 00:56:31.624 } 00:56:31.624 05:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 788613 00:56:31.624 05:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 788613 ']' 00:56:31.624 05:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 788613 00:56:31.624 05:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:56:31.624 05:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:56:31.624 05:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 788613 00:56:31.881 05:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:56:31.881 05:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:56:31.881 05:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 788613' 00:56:31.881 killing process with pid 788613 00:56:31.881 05:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 788613 00:56:31.881 Received shutdown signal, test time was about 10.000000 seconds 00:56:31.881 00:56:31.881 Latency(us) 00:56:31.881 [2024-12-09T04:51:26.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:56:31.881 [2024-12-09T04:51:26.106Z] =================================================================================================================== 00:56:31.881 [2024-12-09T04:51:26.106Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:56:31.881 05:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 788613 00:56:32.138 05:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:56:32.396 05:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:56:32.653 05:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a5fc114-d49c-4d3c-bbf7-9f83ef75e266 00:56:32.653 05:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:56:32.910 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:56:32.910 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:56:32.910 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 785400 00:56:32.910 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 785400 00:56:32.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 785400 Killed "${NVMF_APP[@]}" "$@" 00:56:32.910 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:56:32.910 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:56:32.910 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:56:32.911 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:56:32.911 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:56:32.911 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=790072 00:56:32.911 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 790072 00:56:32.911 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:56:32.911 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 790072 ']' 00:56:32.911 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:56:32.911 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:56:32.911 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:56:32.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
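The dirty variant diverges only at shutdown: the lvol and lvstore are left in place and the first target (pid 785400 here) is killed outright so the lvstore metadata stays dirty, then a second target (pid 790072) is started the same way. A sketch of that step, reusing the illustrative nvmfpid variable from the startup sketch above:

# leave lvol + lvstore behind and kill the target hard
kill -9 $nvmfpid
wait $nvmfpid 2>/dev/null || true

# bring up a fresh target for the recovery phase
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
nvmfpid=$!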
00:56:32.911 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:56:32.911 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:56:33.168 [2024-12-09 05:51:27.148947] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:56:33.168 [2024-12-09 05:51:27.150105] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:56:33.168 [2024-12-09 05:51:27.150185] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:56:33.168 [2024-12-09 05:51:27.224613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:56:33.168 [2024-12-09 05:51:27.283046] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:56:33.168 [2024-12-09 05:51:27.283115] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:56:33.168 [2024-12-09 05:51:27.283143] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:56:33.168 [2024-12-09 05:51:27.283154] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:56:33.168 [2024-12-09 05:51:27.283171] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:56:33.168 [2024-12-09 05:51:27.283798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:56:33.168 [2024-12-09 05:51:27.380803] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:56:33.168 [2024-12-09 05:51:27.381122] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
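For reference, the MiB/s figure in the bdevperf randwrite summary a few steps back is derived directly from the reported IOPS and the 4 KiB I/O size; a quick sanity check in Python (values copied from the log, not re-measured):

    # Values taken from the randwrite result block above.
    iops = 14824.10222716722
    io_size = 4096            # bytes per I/O, as configured for the job

    mibps = iops * io_size / (1024 * 1024)
    print(f"{mibps:.2f} MiB/s")   # prints 57.91, matching the reported 'mibps' field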
00:56:33.424 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:56:33.424 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:56:33.424 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:56:33.424 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:56:33.424 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:56:33.424 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:56:33.424 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:56:33.680 [2024-12-09 05:51:27.686561] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:56:33.681 [2024-12-09 05:51:27.686718] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:56:33.681 [2024-12-09 05:51:27.686765] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:56:33.681 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:56:33.681 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 9547f614-7918-46da-942c-59c830660e12 00:56:33.681 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=9547f614-7918-46da-942c-59c830660e12 00:56:33.681 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:56:33.681 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:56:33.681 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:56:33.681 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:56:33.681 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:56:33.937 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9547f614-7918-46da-942c-59c830660e12 -t 2000 00:56:34.194 [ 00:56:34.194 { 00:56:34.194 "name": "9547f614-7918-46da-942c-59c830660e12", 00:56:34.194 "aliases": [ 00:56:34.194 "lvs/lvol" 00:56:34.194 ], 00:56:34.194 "product_name": "Logical Volume", 00:56:34.194 "block_size": 4096, 00:56:34.194 "num_blocks": 38912, 00:56:34.194 "uuid": "9547f614-7918-46da-942c-59c830660e12", 00:56:34.194 "assigned_rate_limits": { 00:56:34.194 "rw_ios_per_sec": 0, 00:56:34.194 "rw_mbytes_per_sec": 0, 00:56:34.194 
"r_mbytes_per_sec": 0, 00:56:34.194 "w_mbytes_per_sec": 0 00:56:34.194 }, 00:56:34.194 "claimed": false, 00:56:34.194 "zoned": false, 00:56:34.194 "supported_io_types": { 00:56:34.194 "read": true, 00:56:34.194 "write": true, 00:56:34.194 "unmap": true, 00:56:34.194 "flush": false, 00:56:34.194 "reset": true, 00:56:34.195 "nvme_admin": false, 00:56:34.195 "nvme_io": false, 00:56:34.195 "nvme_io_md": false, 00:56:34.195 "write_zeroes": true, 00:56:34.195 "zcopy": false, 00:56:34.195 "get_zone_info": false, 00:56:34.195 "zone_management": false, 00:56:34.195 "zone_append": false, 00:56:34.195 "compare": false, 00:56:34.195 "compare_and_write": false, 00:56:34.195 "abort": false, 00:56:34.195 "seek_hole": true, 00:56:34.195 "seek_data": true, 00:56:34.195 "copy": false, 00:56:34.195 "nvme_iov_md": false 00:56:34.195 }, 00:56:34.195 "driver_specific": { 00:56:34.195 "lvol": { 00:56:34.195 "lvol_store_uuid": "1a5fc114-d49c-4d3c-bbf7-9f83ef75e266", 00:56:34.195 "base_bdev": "aio_bdev", 00:56:34.195 "thin_provision": false, 00:56:34.195 "num_allocated_clusters": 38, 00:56:34.195 "snapshot": false, 00:56:34.195 "clone": false, 00:56:34.195 "esnap_clone": false 00:56:34.195 } 00:56:34.195 } 00:56:34.195 } 00:56:34.195 ] 00:56:34.195 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:56:34.195 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a5fc114-d49c-4d3c-bbf7-9f83ef75e266 00:56:34.195 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:56:34.510 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:56:34.510 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a5fc114-d49c-4d3c-bbf7-9f83ef75e266 00:56:34.510 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:56:34.767 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:56:34.767 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:56:35.024 [2024-12-09 05:51:29.064433] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:56:35.024 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a5fc114-d49c-4d3c-bbf7-9f83ef75e266 00:56:35.024 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:56:35.024 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a5fc114-d49c-4d3c-bbf7-9f83ef75e266 00:56:35.024 05:51:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:56:35.024 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:56:35.024 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:56:35.024 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:56:35.024 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:56:35.024 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:56:35.024 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:56:35.024 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:56:35.024 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a5fc114-d49c-4d3c-bbf7-9f83ef75e266 00:56:35.282 request: 00:56:35.282 { 00:56:35.282 "uuid": "1a5fc114-d49c-4d3c-bbf7-9f83ef75e266", 00:56:35.282 "method": "bdev_lvol_get_lvstores", 00:56:35.282 "req_id": 1 00:56:35.282 } 00:56:35.282 Got JSON-RPC error response 00:56:35.282 response: 00:56:35.282 { 00:56:35.282 "code": -19, 00:56:35.282 "message": "No such device" 00:56:35.282 } 00:56:35.282 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:56:35.282 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:56:35.282 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:56:35.282 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:56:35.282 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:56:35.539 aio_bdev 00:56:35.539 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9547f614-7918-46da-942c-59c830660e12 00:56:35.540 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=9547f614-7918-46da-942c-59c830660e12 00:56:35.540 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:56:35.540 05:51:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:56:35.540 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:56:35.540 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:56:35.540 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:56:35.797 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9547f614-7918-46da-942c-59c830660e12 -t 2000 00:56:36.054 [ 00:56:36.054 { 00:56:36.054 "name": "9547f614-7918-46da-942c-59c830660e12", 00:56:36.054 "aliases": [ 00:56:36.054 "lvs/lvol" 00:56:36.054 ], 00:56:36.054 "product_name": "Logical Volume", 00:56:36.054 "block_size": 4096, 00:56:36.054 "num_blocks": 38912, 00:56:36.054 "uuid": "9547f614-7918-46da-942c-59c830660e12", 00:56:36.054 "assigned_rate_limits": { 00:56:36.054 "rw_ios_per_sec": 0, 00:56:36.054 "rw_mbytes_per_sec": 0, 00:56:36.054 "r_mbytes_per_sec": 0, 00:56:36.054 "w_mbytes_per_sec": 0 00:56:36.054 }, 00:56:36.054 "claimed": false, 00:56:36.054 "zoned": false, 00:56:36.054 "supported_io_types": { 00:56:36.054 "read": true, 00:56:36.054 "write": true, 00:56:36.054 "unmap": true, 00:56:36.054 "flush": false, 00:56:36.054 "reset": true, 00:56:36.054 "nvme_admin": false, 00:56:36.054 "nvme_io": false, 00:56:36.054 "nvme_io_md": false, 00:56:36.054 "write_zeroes": true, 00:56:36.054 "zcopy": false, 00:56:36.054 "get_zone_info": false, 00:56:36.054 "zone_management": false, 00:56:36.054 "zone_append": false, 00:56:36.054 "compare": false, 00:56:36.054 "compare_and_write": false, 00:56:36.054 "abort": false, 00:56:36.054 "seek_hole": true, 00:56:36.054 "seek_data": true, 00:56:36.054 "copy": false, 00:56:36.054 "nvme_iov_md": false 00:56:36.054 }, 00:56:36.054 "driver_specific": { 00:56:36.054 "lvol": { 00:56:36.054 "lvol_store_uuid": "1a5fc114-d49c-4d3c-bbf7-9f83ef75e266", 00:56:36.054 "base_bdev": "aio_bdev", 00:56:36.054 "thin_provision": false, 00:56:36.054 "num_allocated_clusters": 38, 00:56:36.054 "snapshot": false, 00:56:36.054 "clone": false, 00:56:36.054 "esnap_clone": false 00:56:36.054 } 00:56:36.054 } 00:56:36.054 } 00:56:36.054 ] 00:56:36.054 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:56:36.054 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a5fc114-d49c-4d3c-bbf7-9f83ef75e266 00:56:36.054 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:56:36.619 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:56:36.619 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a5fc114-d49c-4d3c-bbf7-9f83ef75e266 00:56:36.619 05:51:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:56:36.619 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:56:36.619 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9547f614-7918-46da-942c-59c830660e12 00:56:36.876 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1a5fc114-d49c-4d3c-bbf7-9f83ef75e266 00:56:37.443 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:56:37.443 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:56:37.700 00:56:37.701 real 0m19.741s 00:56:37.701 user 0m36.785s 00:56:37.701 sys 0m4.605s 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:56:37.701 ************************************ 00:56:37.701 END TEST lvs_grow_dirty 00:56:37.701 ************************************ 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:56:37.701 nvmf_trace.0 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
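The expected-failure check above (bdev_lvol_get_lvstores answered with code -19, "No such device", after the aio_bdev was hot-removed) is plain JSON-RPC against the application socket named in the log. A minimal stdlib sketch of that call, assuming the usual /var/tmp/spdk.sock path and standard JSON-RPC 2.0 framing rather than the project's own rpc.py client:

    import json
    import socket

    def spdk_rpc(method, params=None, sock_path="/var/tmp/spdk.sock"):
        """Send one JSON-RPC 2.0 request to the SPDK app socket and return the reply."""
        request = {"jsonrpc": "2.0", "id": 1, "method": method}
        if params is not None:
            request["params"] = params
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
            sock.connect(sock_path)
            sock.sendall(json.dumps(request).encode())
            buf = b""
            while True:
                chunk = sock.recv(4096)
                if not chunk:
                    raise ConnectionError("socket closed before a full reply arrived")
                buf += chunk
                try:
                    return json.loads(buf)   # stop once a complete JSON reply has arrived
                except json.JSONDecodeError:
                    continue

    reply = spdk_rpc("bdev_lvol_get_lvstores",
                     {"uuid": "1a5fc114-d49c-4d3c-bbf7-9f83ef75e266"})
    if "error" in reply:
        # With the lvstore's base bdev gone this is the -19 / "No such device"
        # response captured in the trace above.
        print(reply["error"]["code"], reply["error"]["message"])
    else:
        print(reply["result"][0]["free_clusters"])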
00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:56:37.701 rmmod nvme_tcp 00:56:37.701 rmmod nvme_fabrics 00:56:37.701 rmmod nvme_keyring 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 790072 ']' 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 790072 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 790072 ']' 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 790072 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 790072 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 790072' 00:56:37.701 killing process with pid 790072 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 790072 00:56:37.701 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 790072 00:56:37.959 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:56:37.959 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:56:37.959 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:56:37.959 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:56:37.959 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:56:37.959 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:56:37.959 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:56:37.959 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:56:37.959 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:56:37.959 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:37.959 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:56:37.959 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:56:40.498 00:56:40.498 real 0m43.282s 00:56:40.498 user 0m56.167s 00:56:40.498 sys 0m8.506s 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:56:40.498 ************************************ 00:56:40.498 END TEST nvmf_lvs_grow 00:56:40.498 ************************************ 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:56:40.498 ************************************ 00:56:40.498 START TEST nvmf_bdev_io_wait 00:56:40.498 ************************************ 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:56:40.498 * Looking for test storage... 
00:56:40.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:56:40.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:40.498 --rc genhtml_branch_coverage=1 00:56:40.498 --rc genhtml_function_coverage=1 00:56:40.498 --rc genhtml_legend=1 00:56:40.498 --rc geninfo_all_blocks=1 00:56:40.498 --rc geninfo_unexecuted_blocks=1 00:56:40.498 00:56:40.498 ' 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:56:40.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:40.498 --rc genhtml_branch_coverage=1 00:56:40.498 --rc genhtml_function_coverage=1 00:56:40.498 --rc genhtml_legend=1 00:56:40.498 --rc geninfo_all_blocks=1 00:56:40.498 --rc geninfo_unexecuted_blocks=1 00:56:40.498 00:56:40.498 ' 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:56:40.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:40.498 --rc genhtml_branch_coverage=1 00:56:40.498 --rc genhtml_function_coverage=1 00:56:40.498 --rc genhtml_legend=1 00:56:40.498 --rc geninfo_all_blocks=1 00:56:40.498 --rc geninfo_unexecuted_blocks=1 00:56:40.498 00:56:40.498 ' 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:56:40.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:40.498 --rc genhtml_branch_coverage=1 00:56:40.498 --rc genhtml_function_coverage=1 00:56:40.498 --rc genhtml_legend=1 00:56:40.498 --rc geninfo_all_blocks=1 00:56:40.498 --rc 
geninfo_unexecuted_blocks=1 00:56:40.498 00:56:40.498 ' 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:56:40.498 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:56:40.499 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:56:42.411 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
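The device scan that follows walks each collected PCI address, keeps the ones whose vendor/device pair is in the e810 list (0x8086 with 0x159b or 0x1592 here), and resolves the kernel net device names from sysfs. A rough Python equivalent of that sysfs walk, for illustration only (the real logic lives in nvmf/common.sh):

    import glob
    import os

    # Intel E810 IDs that the trace above adds to the e810 array.
    E810_IDS = {("0x8086", "0x159b"), ("0x8086", "0x1592")}

    def read_sysfs(path):
        with open(path) as f:
            return f.read().strip()

    for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
        vendor = read_sysfs(os.path.join(dev, "vendor"))
        device = read_sysfs(os.path.join(dev, "device"))
        if (vendor, device) in E810_IDS:
            # Same lookup as pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
            nets = [os.path.basename(p) for p in glob.glob(os.path.join(dev, "net", "*"))]
            print(f"Found {os.path.basename(dev)} ({vendor} - {device}): {nets}")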
00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:56:42.412 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:56:42.412 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:56:42.412 Found net devices under 0000:0a:00.0: cvl_0_0 00:56:42.412 
05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:56:42.412 Found net devices under 0000:0a:00.1: cvl_0_1 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:56:42.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:56:42.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:56:42.412 00:56:42.412 --- 10.0.0.2 ping statistics --- 00:56:42.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:42.412 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:56:42.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:56:42.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:56:42.412 00:56:42.412 --- 10.0.0.1 ping statistics --- 00:56:42.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:42.412 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:56:42.412 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:56:42.413 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:56:42.413 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:56:42.413 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=792596 00:56:42.413 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:56:42.413 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 792596 00:56:42.413 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 792596 ']' 00:56:42.413 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:56:42.413 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:56:42.413 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:56:42.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
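The connectivity that the two pings above just verified comes from the namespace wiring done by nvmf_tcp_init a few entries earlier. Replayed as a short sketch (namespace, interface, and address values copied from the log; needs root and assumes the interfaces exist and are not already configured):

    import subprocess

    NS, TGT_IF, INI_IF = "cvl_0_0_ns_spdk", "cvl_0_0", "cvl_0_1"

    commands = [
        ["ip", "netns", "add", NS],                                  # target-side namespace
        ["ip", "link", "set", TGT_IF, "netns", NS],                  # move target NIC into it
        ["ip", "addr", "add", "10.0.0.1/24", "dev", INI_IF],         # initiator address
        ["ip", "netns", "exec", NS, "ip", "addr", "add", "10.0.0.2/24", "dev", TGT_IF],
        ["ip", "link", "set", INI_IF, "up"],
        ["ip", "netns", "exec", NS, "ip", "link", "set", TGT_IF, "up"],
        ["ip", "netns", "exec", NS, "ip", "link", "set", "lo", "up"],
        ["ping", "-c", "1", "10.0.0.2"],                             # initiator -> target, as in the log
    ]
    for cmd in commands:
        subprocess.run(cmd, check=True)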
00:56:42.413 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:56:42.413 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:56:42.413 [2024-12-09 05:51:36.500651] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:56:42.413 [2024-12-09 05:51:36.501747] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:56:42.413 [2024-12-09 05:51:36.501813] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:56:42.413 [2024-12-09 05:51:36.572643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:56:42.413 [2024-12-09 05:51:36.631300] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:56:42.413 [2024-12-09 05:51:36.631351] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:56:42.413 [2024-12-09 05:51:36.631367] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:56:42.413 [2024-12-09 05:51:36.631379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:56:42.413 [2024-12-09 05:51:36.631390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:56:42.413 [2024-12-09 05:51:36.633092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:56:42.413 [2024-12-09 05:51:36.633158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:56:42.413 [2024-12-09 05:51:36.633209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:56:42.413 [2024-12-09 05:51:36.633212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:56:42.413 [2024-12-09 05:51:36.633753] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:56:42.671 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:56:42.671 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:56:42.671 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:56:42.671 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:56:42.671 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:56:42.671 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:56:42.671 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:56:42.671 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:42.671 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:56:42.671 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:42.671 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:56:42.671 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:42.671 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:56:42.671 [2024-12-09 05:51:36.818559] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:56:42.671 [2024-12-09 05:51:36.818765] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:56:42.671 [2024-12-09 05:51:36.819776] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:56:42.671 [2024-12-09 05:51:36.820657] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:56:42.671 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:42.671 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:56:42.671 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:42.671 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:56:42.672 [2024-12-09 05:51:36.825972] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:56:42.672 Malloc0 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:56:42.672 [2024-12-09 05:51:36.882145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=792622 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=792624 00:56:42.672 05:51:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:56:42.672 { 00:56:42.672 "params": { 00:56:42.672 "name": "Nvme$subsystem", 00:56:42.672 "trtype": "$TEST_TRANSPORT", 00:56:42.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:56:42.672 "adrfam": "ipv4", 00:56:42.672 "trsvcid": "$NVMF_PORT", 00:56:42.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:56:42.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:56:42.672 "hdgst": ${hdgst:-false}, 00:56:42.672 "ddgst": ${ddgst:-false} 00:56:42.672 }, 00:56:42.672 "method": "bdev_nvme_attach_controller" 00:56:42.672 } 00:56:42.672 EOF 00:56:42.672 )") 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=792626 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=792628 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:56:42.672 { 00:56:42.672 "params": { 00:56:42.672 "name": "Nvme$subsystem", 00:56:42.672 "trtype": "$TEST_TRANSPORT", 00:56:42.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:56:42.672 "adrfam": "ipv4", 00:56:42.672 "trsvcid": "$NVMF_PORT", 00:56:42.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:56:42.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:56:42.672 "hdgst": ${hdgst:-false}, 00:56:42.672 "ddgst": ${ddgst:-false} 00:56:42.672 }, 00:56:42.672 "method": "bdev_nvme_attach_controller" 00:56:42.672 } 00:56:42.672 EOF 
00:56:42.672 )") 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:56:42.672 { 00:56:42.672 "params": { 00:56:42.672 "name": "Nvme$subsystem", 00:56:42.672 "trtype": "$TEST_TRANSPORT", 00:56:42.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:56:42.672 "adrfam": "ipv4", 00:56:42.672 "trsvcid": "$NVMF_PORT", 00:56:42.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:56:42.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:56:42.672 "hdgst": ${hdgst:-false}, 00:56:42.672 "ddgst": ${ddgst:-false} 00:56:42.672 }, 00:56:42.672 "method": "bdev_nvme_attach_controller" 00:56:42.672 } 00:56:42.672 EOF 00:56:42.672 )") 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:56:42.672 { 00:56:42.672 "params": { 00:56:42.672 "name": "Nvme$subsystem", 00:56:42.672 "trtype": "$TEST_TRANSPORT", 00:56:42.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:56:42.672 "adrfam": "ipv4", 00:56:42.672 "trsvcid": "$NVMF_PORT", 00:56:42.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:56:42.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:56:42.672 "hdgst": ${hdgst:-false}, 00:56:42.672 "ddgst": ${ddgst:-false} 00:56:42.672 }, 00:56:42.672 "method": "bdev_nvme_attach_controller" 00:56:42.672 } 00:56:42.672 EOF 00:56:42.672 )") 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 792622 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:56:42.672 "params": { 00:56:42.672 "name": "Nvme1", 00:56:42.672 "trtype": "tcp", 00:56:42.672 "traddr": "10.0.0.2", 00:56:42.672 "adrfam": "ipv4", 00:56:42.672 "trsvcid": "4420", 00:56:42.672 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:56:42.672 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:56:42.672 "hdgst": false, 00:56:42.672 "ddgst": false 00:56:42.672 }, 00:56:42.672 "method": "bdev_nvme_attach_controller" 00:56:42.672 }' 00:56:42.672 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:56:42.931 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:56:42.931 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:56:42.931 "params": { 00:56:42.931 "name": "Nvme1", 00:56:42.931 "trtype": "tcp", 00:56:42.931 "traddr": "10.0.0.2", 00:56:42.931 "adrfam": "ipv4", 00:56:42.931 "trsvcid": "4420", 00:56:42.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:56:42.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:56:42.931 "hdgst": false, 00:56:42.931 "ddgst": false 00:56:42.931 }, 00:56:42.931 "method": "bdev_nvme_attach_controller" 00:56:42.931 }' 00:56:42.931 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:56:42.931 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:56:42.931 "params": { 00:56:42.931 "name": "Nvme1", 00:56:42.931 "trtype": "tcp", 00:56:42.931 "traddr": "10.0.0.2", 00:56:42.931 "adrfam": "ipv4", 00:56:42.931 "trsvcid": "4420", 00:56:42.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:56:42.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:56:42.931 "hdgst": false, 00:56:42.931 "ddgst": false 00:56:42.931 }, 00:56:42.931 "method": "bdev_nvme_attach_controller" 00:56:42.931 }' 00:56:42.931 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:56:42.931 05:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:56:42.931 "params": { 00:56:42.931 "name": "Nvme1", 00:56:42.931 "trtype": "tcp", 00:56:42.931 "traddr": "10.0.0.2", 00:56:42.931 "adrfam": "ipv4", 00:56:42.931 "trsvcid": "4420", 00:56:42.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:56:42.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:56:42.931 "hdgst": false, 00:56:42.931 "ddgst": false 00:56:42.931 }, 00:56:42.931 "method": "bdev_nvme_attach_controller" 00:56:42.931 }' 00:56:42.931 [2024-12-09 05:51:36.934647] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:56:42.931 [2024-12-09 05:51:36.934648] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:56:42.931 [2024-12-09 05:51:36.934647] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:56:42.931 [2024-12-09 05:51:36.934657] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
00:56:42.931 [2024-12-09 05:51:36.934732] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:56:42.931 [2024-12-09 05:51:36.934731] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:56:42.931 [2024-12-09 05:51:36.934731] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:56:42.931 [2024-12-09 05:51:36.934750] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:56:42.931 [2024-12-09 05:51:37.123980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:56:43.189 [2024-12-09 05:51:37.179231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:56:43.189 [2024-12-09 05:51:37.226723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:56:43.189 [2024-12-09 05:51:37.280448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:56:43.189 [2024-12-09 05:51:37.352146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:56:43.189 [2024-12-09 05:51:37.409497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:56:43.189 [2024-12-09 05:51:37.412950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:56:43.446 [2024-12-09 05:51:37.460640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:56:43.446 Running I/O for 1 seconds... 00:56:43.446 Running I/O for 1 seconds... 00:56:43.704 Running I/O for 1 seconds... 00:56:43.704 Running I/O for 1 seconds...
00:56:44.637 10871.00 IOPS, 42.46 MiB/s 00:56:44.637 Latency(us) 00:56:44.637 [2024-12-09T04:51:38.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:56:44.637 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:56:44.637 Nvme1n1 : 1.01 10917.10 42.64 0.00 0.00 11679.06 4223.43 14078.10 00:56:44.637 [2024-12-09T04:51:38.862Z] =================================================================================================================== 00:56:44.637 [2024-12-09T04:51:38.862Z] Total : 10917.10 42.64 0.00 0.00 11679.06 4223.43 14078.10 00:56:44.637 5687.00 IOPS, 22.21 MiB/s 00:56:44.637 Latency(us) 00:56:44.637 [2024-12-09T04:51:38.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:56:44.637 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:56:44.637 Nvme1n1 : 1.02 5683.22 22.20 0.00 0.00 22233.17 2402.99 34758.35 00:56:44.637 [2024-12-09T04:51:38.862Z] =================================================================================================================== 00:56:44.637 [2024-12-09T04:51:38.862Z] Total : 5683.22 22.20 0.00 0.00 22233.17 2402.99 34758.35 00:56:44.637 182648.00 IOPS, 713.47 MiB/s 00:56:44.637 Latency(us) 00:56:44.637 [2024-12-09T04:51:38.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:56:44.637 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:56:44.637 Nvme1n1 : 1.00 182294.15 712.09 0.00 0.00 698.30 315.54 1893.26 00:56:44.637 [2024-12-09T04:51:38.862Z] =================================================================================================================== 00:56:44.637 [2024-12-09T04:51:38.862Z] Total : 182294.15 712.09 0.00 0.00 698.30 315.54 1893.26 00:56:44.637 5850.00 IOPS, 22.85 MiB/s 00:56:44.637 Latency(us) 00:56:44.637 [2024-12-09T04:51:38.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:56:44.637 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:56:44.637 Nvme1n1 : 1.01 5957.64 23.27 0.00 0.00 21422.90 4126.34 43690.67 00:56:44.637 [2024-12-09T04:51:38.862Z] =================================================================================================================== 00:56:44.637 [2024-12-09T04:51:38.862Z] Total : 5957.64 23.27 0.00 0.00 21422.90 4126.34 43690.67 00:56:44.637 05:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 792624 00:56:44.637 05:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 792626 00:56:44.894 05:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 792628 00:56:44.894 05:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:56:44.894 05:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:44.894 05:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:56:44.894 05:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:44.894 05:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:56:44.894 05:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:56:44.894 05:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:56:44.894 05:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:56:44.894 05:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:56:44.894 05:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:56:44.894 05:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:56:44.894 05:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:56:44.894 rmmod nvme_tcp 00:56:44.894 rmmod nvme_fabrics 00:56:44.894 rmmod nvme_keyring 00:56:44.894 05:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:56:44.894 05:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:56:44.894 05:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:56:44.894 05:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 792596 ']' 00:56:44.894 05:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 792596 00:56:44.894 05:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 792596 ']' 00:56:44.894 05:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 792596 00:56:44.894 05:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:56:44.894 05:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:56:44.894 05:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 792596 00:56:44.894 05:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:56:44.894 05:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:56:44.894 05:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 792596' 00:56:44.894 killing process with pid 792596 00:56:44.894 05:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 792596 00:56:44.894 05:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 792596 00:56:45.152 05:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:56:45.152 05:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:56:45.152 05:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:56:45.152 05:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:56:45.152 05:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:56:45.152 
05:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:56:45.152 05:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:56:45.152 05:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:56:45.152 05:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:56:45.152 05:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:45.152 05:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:56:45.152 05:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:56:47.680 00:56:47.680 real 0m7.187s 00:56:47.680 user 0m14.563s 00:56:47.680 sys 0m4.019s 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:56:47.680 ************************************ 00:56:47.680 END TEST nvmf_bdev_io_wait 00:56:47.680 ************************************ 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:56:47.680 ************************************ 00:56:47.680 START TEST nvmf_queue_depth 00:56:47.680 ************************************ 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:56:47.680 * Looking for test storage... 
00:56:47.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:56:47.680 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:56:47.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:47.681 --rc genhtml_branch_coverage=1 00:56:47.681 --rc genhtml_function_coverage=1 00:56:47.681 --rc genhtml_legend=1 00:56:47.681 --rc geninfo_all_blocks=1 00:56:47.681 --rc geninfo_unexecuted_blocks=1 00:56:47.681 00:56:47.681 ' 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:56:47.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:47.681 --rc genhtml_branch_coverage=1 00:56:47.681 --rc genhtml_function_coverage=1 00:56:47.681 --rc genhtml_legend=1 00:56:47.681 --rc geninfo_all_blocks=1 00:56:47.681 --rc geninfo_unexecuted_blocks=1 00:56:47.681 00:56:47.681 ' 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:56:47.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:47.681 --rc genhtml_branch_coverage=1 00:56:47.681 --rc genhtml_function_coverage=1 00:56:47.681 --rc genhtml_legend=1 00:56:47.681 --rc geninfo_all_blocks=1 00:56:47.681 --rc geninfo_unexecuted_blocks=1 00:56:47.681 00:56:47.681 ' 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:56:47.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:47.681 --rc genhtml_branch_coverage=1 00:56:47.681 --rc genhtml_function_coverage=1 00:56:47.681 --rc genhtml_legend=1 00:56:47.681 --rc geninfo_all_blocks=1 00:56:47.681 --rc 
geninfo_unexecuted_blocks=1 00:56:47.681 00:56:47.681 ' 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:56:47.681 05:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:56:49.606 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:56:49.606 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:56:49.606 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:56:49.606 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:56:49.606 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:56:49.606 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:56:49.606 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:56:49.606 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:56:49.606 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:56:49.606 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:56:49.606 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:56:49.606 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:56:49.606 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:56:49.606 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:56:49.606 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:56:49.606 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:56:49.606 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:56:49.606 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:56:49.607 05:51:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:56:49.607 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:56:49.607 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:56:49.607 Found net devices under 0000:0a:00.0: cvl_0_0 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:56:49.607 Found net devices under 0000:0a:00.1: cvl_0_1 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:56:49.607 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:56:49.866 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:56:49.866 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:56:49.866 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:56:49.866 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:56:49.866 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:56:49.866 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:56:49.866 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:56:49.866 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:56:49.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:56:49.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:56:49.866 00:56:49.866 --- 10.0.0.2 ping statistics --- 00:56:49.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:49.866 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:56:49.866 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:56:49.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:56:49.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:56:49.866 00:56:49.866 --- 10.0.0.1 ping statistics --- 00:56:49.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:49.866 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:56:49.866 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:56:49.866 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:56:49.866 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:56:49.866 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:56:49.866 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:56:49.866 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:56:49.866 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:56:49.866 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:56:49.866 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:56:49.866 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:56:49.866 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:56:49.866 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:56:49.866 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:56:49.866 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=794966 00:56:49.866 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:56:49.866 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 794966 00:56:49.866 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 794966 ']' 00:56:49.867 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:56:49.867 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:56:49.867 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:56:49.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:56:49.867 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:56:49.867 05:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:56:49.867 [2024-12-09 05:51:43.969380] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:56:49.867 [2024-12-09 05:51:43.970449] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:56:49.867 [2024-12-09 05:51:43.970508] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:56:49.867 [2024-12-09 05:51:44.045826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:56:50.125 [2024-12-09 05:51:44.102110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:56:50.125 [2024-12-09 05:51:44.102176] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:56:50.125 [2024-12-09 05:51:44.102199] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:56:50.125 [2024-12-09 05:51:44.102209] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:56:50.125 [2024-12-09 05:51:44.102218] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:56:50.125 [2024-12-09 05:51:44.102784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:56:50.125 [2024-12-09 05:51:44.188342] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:56:50.125 [2024-12-09 05:51:44.188649] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
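For orientation, the nvmf_tcp_init/nvmfappstart trace above condenses to the sequence sketched below. This is a hedged reconstruction, not a verbatim excerpt of nvmf/common.sh: the interface names (cvl_0_0, cvl_0_1), the namespace name, the 10.0.0.0/24 addresses and the workspace path are specific to this run, and the option handling in the real script is more involved.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
            -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root namespace -> target address
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator address
    ip netns exec cvl_0_0_ns_spdk \
            /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
            -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &     # interrupt-mode target pinned to core 1 (mask 0x2)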
00:56:50.125 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:56:50.125 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:56:50.125 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:56:50.125 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:56:50.125 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:56:50.125 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:56:50.125 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:56:50.126 [2024-12-09 05:51:44.239411] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:56:50.126 Malloc0 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:56:50.126 [2024-12-09 05:51:44.295502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=794995 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 794995 /var/tmp/bdevperf.sock 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 794995 ']' 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:56:50.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:56:50.126 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:56:50.126 [2024-12-09 05:51:44.341183] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
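For reference, the queue_depth.sh body traced above boils down to the RPC and bdevperf sequence below. The sketch assumes the rpc_cmd wrapper resolves to scripts/rpc.py against the target's default /var/tmp/spdk.sock (the socket waitforlisten polls here); all flags and paths are taken from this run.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Target side: transport, malloc backing bdev, subsystem, namespace, TCP listener.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: bdevperf with queue depth 1024, 4 KiB verify I/O for 10 s.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
            -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
            -s /var/tmp/bdevperf.sock perform_tests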
00:56:50.126 [2024-12-09 05:51:44.341246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid794995 ] 00:56:50.384 [2024-12-09 05:51:44.408485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:56:50.384 [2024-12-09 05:51:44.464913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:56:50.384 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:56:50.384 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:56:50.384 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:56:50.384 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:50.384 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:56:50.642 NVMe0n1 00:56:50.642 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:50.642 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:56:50.642 Running I/O for 10 seconds... 00:56:52.944 8192.00 IOPS, 32.00 MiB/s [2024-12-09T04:51:48.103Z] 8192.50 IOPS, 32.00 MiB/s [2024-12-09T04:51:49.037Z] 8201.33 IOPS, 32.04 MiB/s [2024-12-09T04:51:49.969Z] 8278.75 IOPS, 32.34 MiB/s [2024-12-09T04:51:50.904Z] 8347.60 IOPS, 32.61 MiB/s [2024-12-09T04:51:52.275Z] 8363.50 IOPS, 32.67 MiB/s [2024-12-09T04:51:53.207Z] 8343.57 IOPS, 32.59 MiB/s [2024-12-09T04:51:54.210Z] 8393.75 IOPS, 32.79 MiB/s [2024-12-09T04:51:55.162Z] 8417.67 IOPS, 32.88 MiB/s [2024-12-09T04:51:55.162Z] 8402.10 IOPS, 32.82 MiB/s 00:57:00.937 Latency(us) 00:57:00.937 [2024-12-09T04:51:55.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:57:00.937 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:57:00.937 Verification LBA range: start 0x0 length 0x4000 00:57:00.937 NVMe0n1 : 10.07 8446.79 33.00 0.00 0.00 120735.88 14272.28 71458.51 00:57:00.937 [2024-12-09T04:51:55.162Z] =================================================================================================================== 00:57:00.937 [2024-12-09T04:51:55.162Z] Total : 8446.79 33.00 0.00 0.00 120735.88 14272.28 71458.51 00:57:00.937 { 00:57:00.937 "results": [ 00:57:00.937 { 00:57:00.937 "job": "NVMe0n1", 00:57:00.937 "core_mask": "0x1", 00:57:00.937 "workload": "verify", 00:57:00.937 "status": "finished", 00:57:00.937 "verify_range": { 00:57:00.937 "start": 0, 00:57:00.937 "length": 16384 00:57:00.937 }, 00:57:00.937 "queue_depth": 1024, 00:57:00.937 "io_size": 4096, 00:57:00.937 "runtime": 10.068327, 00:57:00.937 "iops": 8446.785647704926, 00:57:00.937 "mibps": 32.99525643634737, 00:57:00.937 "io_failed": 0, 00:57:00.937 "io_timeout": 0, 00:57:00.937 "avg_latency_us": 120735.87584549356, 00:57:00.937 "min_latency_us": 14272.284444444444, 00:57:00.937 "max_latency_us": 71458.5125925926 00:57:00.937 } 00:57:00.937 ], 
00:57:00.937 "core_count": 1 00:57:00.937 } 00:57:00.937 05:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 794995 00:57:00.937 05:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 794995 ']' 00:57:00.937 05:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 794995 00:57:00.937 05:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:57:00.937 05:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:57:00.937 05:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 794995 00:57:00.937 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:57:00.937 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:57:00.937 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 794995' 00:57:00.937 killing process with pid 794995 00:57:00.937 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 794995 00:57:00.937 Received shutdown signal, test time was about 10.000000 seconds 00:57:00.937 00:57:00.937 Latency(us) 00:57:00.937 [2024-12-09T04:51:55.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:57:00.937 [2024-12-09T04:51:55.162Z] =================================================================================================================== 00:57:00.937 [2024-12-09T04:51:55.162Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:57:00.937 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 794995 00:57:01.194 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:57:01.194 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:57:01.194 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:57:01.194 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:57:01.194 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:57:01.194 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:57:01.194 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:57:01.194 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:57:01.194 rmmod nvme_tcp 00:57:01.194 rmmod nvme_fabrics 00:57:01.194 rmmod nvme_keyring 00:57:01.194 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:57:01.194 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:57:01.194 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:57:01.194 05:51:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 794966 ']' 00:57:01.194 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 794966 00:57:01.194 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 794966 ']' 00:57:01.194 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 794966 00:57:01.194 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:57:01.194 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:57:01.194 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 794966 00:57:01.194 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:57:01.194 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:57:01.194 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 794966' 00:57:01.194 killing process with pid 794966 00:57:01.194 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 794966 00:57:01.194 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 794966 00:57:01.453 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:57:01.453 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:57:01.453 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:57:01.453 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:57:01.453 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:57:01.453 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:57:01.453 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:57:01.453 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:57:01.453 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:57:01.453 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:01.453 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:01.453 05:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:03.993 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:57:03.993 00:57:03.993 real 0m16.276s 00:57:03.993 user 0m22.375s 00:57:03.994 sys 0m3.406s 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:57:03.994 ************************************ 00:57:03.994 END TEST nvmf_queue_depth 00:57:03.994 ************************************ 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:57:03.994 ************************************ 00:57:03.994 START TEST nvmf_target_multipath 00:57:03.994 ************************************ 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:57:03.994 * Looking for test storage... 00:57:03.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:57:03.994 05:51:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:57:03.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:03.994 --rc genhtml_branch_coverage=1 00:57:03.994 --rc genhtml_function_coverage=1 00:57:03.994 --rc genhtml_legend=1 00:57:03.994 --rc geninfo_all_blocks=1 00:57:03.994 --rc geninfo_unexecuted_blocks=1 00:57:03.994 00:57:03.994 ' 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:57:03.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:03.994 --rc genhtml_branch_coverage=1 00:57:03.994 --rc genhtml_function_coverage=1 00:57:03.994 --rc genhtml_legend=1 00:57:03.994 --rc geninfo_all_blocks=1 00:57:03.994 --rc geninfo_unexecuted_blocks=1 00:57:03.994 00:57:03.994 ' 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:57:03.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:03.994 --rc genhtml_branch_coverage=1 00:57:03.994 --rc genhtml_function_coverage=1 00:57:03.994 --rc genhtml_legend=1 00:57:03.994 --rc geninfo_all_blocks=1 00:57:03.994 --rc 
geninfo_unexecuted_blocks=1 00:57:03.994 00:57:03.994 ' 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:57:03.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:03.994 --rc genhtml_branch_coverage=1 00:57:03.994 --rc genhtml_function_coverage=1 00:57:03.994 --rc genhtml_legend=1 00:57:03.994 --rc geninfo_all_blocks=1 00:57:03.994 --rc geninfo_unexecuted_blocks=1 00:57:03.994 00:57:03.994 ' 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:03.994 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:57:03.995 05:51:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:57:03.995 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
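To make the argument assembly in the trace above easier to follow, this is roughly what build_nvmf_app_args and the TCP init path do to NVMF_APP in this configuration. The interrupt_mode variable name is a placeholder for the literal '1' the trace shows being tested, and the base nvmf_tgt command is set elsewhere in common.sh.

    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)      # shm id 0, tracepoint group mask 0xFFFF
    NVMF_APP+=("${NO_HUGE[@]}")                      # empty in this run (hugepages in use)
    [ "$interrupt_mode" -eq 1 ] && NVMF_APP+=(--interrupt-mode)   # placeholder name; trace shows '[' 1 -eq 1 ']'
    # Once the TCP test bed exists, the whole command is wrapped in the target namespace:
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")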
00:57:05.896 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:57:05.896 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:57:05.896 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:57:05.896 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:57:05.896 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:57:05.896 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:57:05.897 05:52:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:57:05.897 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:57:05.897 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:57:05.897 05:52:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:57:05.897 Found net devices under 0000:0a:00.0: cvl_0_0 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:57:05.897 Found net devices under 0000:0a:00.1: cvl_0_1 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:57:05.897 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:57:06.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:57:06.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:57:06.155 00:57:06.155 --- 10.0.0.2 ping statistics --- 00:57:06.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:06.155 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:57:06.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:57:06.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:57:06.155 00:57:06.155 --- 10.0.0.1 ping statistics --- 00:57:06.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:06.155 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:57:06.155 only one NIC for nvmf test 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:57:06.155 rmmod nvme_tcp 00:57:06.155 rmmod nvme_fabrics 00:57:06.155 rmmod nvme_keyring 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:57:06.155 05:52:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:06.155 05:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:57:08.681 05:52:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:57:08.681 00:57:08.681 real 0m4.657s 00:57:08.681 user 0m0.926s 00:57:08.681 sys 0m1.678s 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:57:08.681 ************************************ 00:57:08.681 END TEST nvmf_target_multipath 00:57:08.681 ************************************ 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:57:08.681 ************************************ 00:57:08.681 START TEST nvmf_zcopy 00:57:08.681 ************************************ 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:57:08.681 * Looking for test storage... 
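The nvmftestfini teardown traced above reduces to a handful of host commands. A minimal sketch, assuming the cvl_0_0_ns_spdk namespace and cvl_0_1 interface from this run, and taking remove_spdk_ns to be equivalent to deleting that namespace:

# unload the initiator-side modules loaded for the test
modprobe -r nvme-tcp
modprobe -r nvme-fabrics
# keep only iptables rules not tagged with the SPDK_NVMF comment the harness adds
iptables-save | grep -v SPDK_NVMF | iptables-restore
# assumed equivalent of remove_spdk_ns: drop the target namespace, then clear the initiator address
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1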
00:57:08.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:57:08.681 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:57:08.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:08.682 --rc genhtml_branch_coverage=1 00:57:08.682 --rc genhtml_function_coverage=1 00:57:08.682 --rc genhtml_legend=1 00:57:08.682 --rc geninfo_all_blocks=1 00:57:08.682 --rc geninfo_unexecuted_blocks=1 00:57:08.682 00:57:08.682 ' 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:57:08.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:08.682 --rc genhtml_branch_coverage=1 00:57:08.682 --rc genhtml_function_coverage=1 00:57:08.682 --rc genhtml_legend=1 00:57:08.682 --rc geninfo_all_blocks=1 00:57:08.682 --rc geninfo_unexecuted_blocks=1 00:57:08.682 00:57:08.682 ' 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:57:08.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:08.682 --rc genhtml_branch_coverage=1 00:57:08.682 --rc genhtml_function_coverage=1 00:57:08.682 --rc genhtml_legend=1 00:57:08.682 --rc geninfo_all_blocks=1 00:57:08.682 --rc geninfo_unexecuted_blocks=1 00:57:08.682 00:57:08.682 ' 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:57:08.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:08.682 --rc genhtml_branch_coverage=1 00:57:08.682 --rc genhtml_function_coverage=1 00:57:08.682 --rc genhtml_legend=1 00:57:08.682 --rc geninfo_all_blocks=1 00:57:08.682 --rc geninfo_unexecuted_blocks=1 00:57:08.682 00:57:08.682 ' 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:57:08.682 05:52:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:57:08.682 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:57:08.683 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:57:08.683 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:57:08.683 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:57:08.683 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:57:08.683 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:57:08.683 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:08.683 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:08.683 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:08.683 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:57:08.683 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:57:08.683 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:57:08.683 05:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:57:10.586 05:52:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:57:10.586 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:57:10.586 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:57:10.586 Found net devices under 0000:0a:00.0: cvl_0_0 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:57:10.586 Found net devices under 0000:0a:00.1: cvl_0_1 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:57:10.586 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:57:10.587 05:52:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:57:10.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:57:10.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:57:10.587 00:57:10.587 --- 10.0.0.2 ping statistics --- 00:57:10.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:10.587 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:57:10.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:57:10.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:57:10.587 00:57:10.587 --- 10.0.0.1 ping statistics --- 00:57:10.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:10.587 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=800178 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 800178 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 800178 ']' 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:57:10.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:57:10.587 05:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:57:10.845 [2024-12-09 05:52:04.844849] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:57:10.845 [2024-12-09 05:52:04.845989] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:57:10.845 [2024-12-09 05:52:04.846045] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:57:10.845 [2024-12-09 05:52:04.917172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:57:10.845 [2024-12-09 05:52:04.972240] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:57:10.845 [2024-12-09 05:52:04.972301] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:57:10.845 [2024-12-09 05:52:04.972332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:57:10.845 [2024-12-09 05:52:04.972343] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:57:10.845 [2024-12-09 05:52:04.972352] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:57:10.845 [2024-12-09 05:52:04.972886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:57:10.845 [2024-12-09 05:52:05.060401] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:57:10.845 [2024-12-09 05:52:05.060727] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
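The target for the zcopy test is started inside the cvl_0_0_ns_spdk namespace and then configured over /var/tmp/spdk.sock. The rpc_cmd calls traced below forward their arguments to the RPC server; a sketch of the same sequence, with paths relative to an SPDK checkout and rpc_cmd assumed to wrap scripts/rpc.py:

# launch the target in interrupt mode on core 1 (command as traced above)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
# wait for /var/tmp/spdk.sock (waitforlisten in the harness), then configure the target;
# all flags below are copied verbatim from the rpc_cmd trace that follows
./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1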
00:57:11.101 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:57:11.101 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:57:11.101 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:57:11.101 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:57:11.101 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:57:11.101 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:57:11.101 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:57:11.101 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:57:11.101 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:11.101 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:57:11.101 [2024-12-09 05:52:05.113546] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:57:11.101 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:11.101 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:57:11.101 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:11.101 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:57:11.101 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:11.101 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:57:11.101 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:11.101 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:57:11.101 [2024-12-09 05:52:05.129722] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:57:11.101 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:11.101 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:57:11.101 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:11.101 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:57:11.101 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:11.101 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:57:11.101 05:52:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:11.101 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:57:11.101 malloc0 00:57:11.102 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:11.102 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:57:11.102 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:11.102 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:57:11.102 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:11.102 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:57:11.102 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:57:11.102 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:57:11.102 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:57:11.102 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:11.102 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:11.102 { 00:57:11.102 "params": { 00:57:11.102 "name": "Nvme$subsystem", 00:57:11.102 "trtype": "$TEST_TRANSPORT", 00:57:11.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:11.102 "adrfam": "ipv4", 00:57:11.102 "trsvcid": "$NVMF_PORT", 00:57:11.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:11.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:11.102 "hdgst": ${hdgst:-false}, 00:57:11.102 "ddgst": ${ddgst:-false} 00:57:11.102 }, 00:57:11.102 "method": "bdev_nvme_attach_controller" 00:57:11.102 } 00:57:11.102 EOF 00:57:11.102 )") 00:57:11.102 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:57:11.102 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:57:11.102 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:57:11.102 05:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:57:11.102 "params": { 00:57:11.102 "name": "Nvme1", 00:57:11.102 "trtype": "tcp", 00:57:11.102 "traddr": "10.0.0.2", 00:57:11.102 "adrfam": "ipv4", 00:57:11.102 "trsvcid": "4420", 00:57:11.102 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:57:11.102 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:57:11.102 "hdgst": false, 00:57:11.102 "ddgst": false 00:57:11.102 }, 00:57:11.102 "method": "bdev_nvme_attach_controller" 00:57:11.102 }' 00:57:11.102 [2024-12-09 05:52:05.213851] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
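gen_nvmf_target_json above resolves to the bdev_nvme_attach_controller parameters printed in the trace and is handed to bdevperf over /dev/fd/62. Written out as a file, the config is assumed to take the usual subsystems/bdev JSON shape; only the parameters themselves are taken from the trace:

# hypothetical standalone config equivalent to the /dev/fd/62 input above
cat > bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# first phase: 10 s verify workload, queue depth 128, 8 KiB I/O (flags as traced above)
./build/examples/bdevperf --json bdevperf.json -t 10 -q 128 -w verify -o 8192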
00:57:11.102 [2024-12-09 05:52:05.213916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid800199 ] 00:57:11.102 [2024-12-09 05:52:05.280160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:57:11.359 [2024-12-09 05:52:05.343586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:57:11.359 Running I/O for 10 seconds... 00:57:13.664 5560.00 IOPS, 43.44 MiB/s [2024-12-09T04:52:08.822Z] 5523.50 IOPS, 43.15 MiB/s [2024-12-09T04:52:09.753Z] 5586.00 IOPS, 43.64 MiB/s [2024-12-09T04:52:10.681Z] 5612.25 IOPS, 43.85 MiB/s [2024-12-09T04:52:11.633Z] 5624.20 IOPS, 43.94 MiB/s [2024-12-09T04:52:13.002Z] 5642.17 IOPS, 44.08 MiB/s [2024-12-09T04:52:13.934Z] 5642.14 IOPS, 44.08 MiB/s [2024-12-09T04:52:14.866Z] 5643.62 IOPS, 44.09 MiB/s [2024-12-09T04:52:15.806Z] 5648.22 IOPS, 44.13 MiB/s [2024-12-09T04:52:15.806Z] 5656.40 IOPS, 44.19 MiB/s 00:57:21.581 Latency(us) 00:57:21.581 [2024-12-09T04:52:15.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:57:21.581 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:57:21.581 Verification LBA range: start 0x0 length 0x1000 00:57:21.581 Nvme1n1 : 10.02 5659.50 44.21 0.00 0.00 22555.92 3689.43 47768.46 00:57:21.581 [2024-12-09T04:52:15.806Z] =================================================================================================================== 00:57:21.581 [2024-12-09T04:52:15.806Z] Total : 5659.50 44.21 0.00 0.00 22555.92 3689.43 47768.46 00:57:21.839 05:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=801397 00:57:21.839 05:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:57:21.839 05:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:57:21.840 05:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:57:21.840 05:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:57:21.840 05:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:57:21.840 05:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:57:21.840 05:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:57:21.840 05:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:57:21.840 { 00:57:21.840 "params": { 00:57:21.840 "name": "Nvme$subsystem", 00:57:21.840 "trtype": "$TEST_TRANSPORT", 00:57:21.840 "traddr": "$NVMF_FIRST_TARGET_IP", 00:57:21.840 "adrfam": "ipv4", 00:57:21.840 "trsvcid": "$NVMF_PORT", 00:57:21.840 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:57:21.840 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:57:21.840 "hdgst": ${hdgst:-false}, 00:57:21.840 "ddgst": ${ddgst:-false} 00:57:21.840 }, 00:57:21.840 "method": "bdev_nvme_attach_controller" 00:57:21.840 } 00:57:21.840 EOF 00:57:21.840 )") 00:57:21.840 [2024-12-09 05:52:15.881418] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:57:21.840 [2024-12-09 05:52:15.881460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:21.840 05:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:57:21.840 05:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:57:21.840 05:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:57:21.840 05:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:57:21.840 "params": { 00:57:21.840 "name": "Nvme1", 00:57:21.840 "trtype": "tcp", 00:57:21.840 "traddr": "10.0.0.2", 00:57:21.840 "adrfam": "ipv4", 00:57:21.840 "trsvcid": "4420", 00:57:21.840 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:57:21.840 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:57:21.840 "hdgst": false, 00:57:21.840 "ddgst": false 00:57:21.840 }, 00:57:21.840 "method": "bdev_nvme_attach_controller" 00:57:21.840 }' 00:57:21.840 [2024-12-09 05:52:15.889360] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:21.840 [2024-12-09 05:52:15.889384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:21.840 [2024-12-09 05:52:15.897365] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:21.840 [2024-12-09 05:52:15.897388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:21.840 [2024-12-09 05:52:15.905370] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:21.840 [2024-12-09 05:52:15.905393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:21.840 [2024-12-09 05:52:15.913371] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:21.840 [2024-12-09 05:52:15.913393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:21.840 [2024-12-09 05:52:15.921361] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:21.840 [2024-12-09 05:52:15.921383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:21.840 [2024-12-09 05:52:15.925903] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
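The second bdevperf phase below uses a mixed workload while namespace RPCs are exercised against the live subsystem; the repeated 'Requested NSID 1 already in use' / 'Unable to add namespace' pairs appear to be the target rejecting an add of a namespace ID that is still attached. A sketch of the two pieces, reusing the hypothetical bdevperf.json from above:

# second phase: 5 s, queue depth 128, 50/50 random read/write, 8 KiB I/O (flags as traced below)
./build/examples/bdevperf --json bdevperf.json -t 5 -q 128 -w randrw -M 50 -o 8192 &
# re-adding NSID 1 while it is still attached reproduces the error seen throughout this run
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1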
00:57:21.840 [2024-12-09 05:52:15.925988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid801397 ] 00:57:21.840 [2024-12-09 05:52:15.929364] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:21.840 [2024-12-09 05:52:15.929398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:21.840 [2024-12-09 05:52:15.937364] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:21.840 [2024-12-09 05:52:15.937386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:21.840 [2024-12-09 05:52:15.945363] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:21.840 [2024-12-09 05:52:15.945385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:21.840 [2024-12-09 05:52:15.953369] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:21.840 [2024-12-09 05:52:15.953391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:21.840 [2024-12-09 05:52:15.961372] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:21.840 [2024-12-09 05:52:15.961394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:21.840 [2024-12-09 05:52:15.969354] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:21.840 [2024-12-09 05:52:15.969375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:21.840 [2024-12-09 05:52:15.977359] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:21.840 [2024-12-09 05:52:15.977382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:21.840 [2024-12-09 05:52:15.985356] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:21.840 [2024-12-09 05:52:15.985378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:21.840 [2024-12-09 05:52:15.993366] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:21.840 [2024-12-09 05:52:15.993388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:21.840 [2024-12-09 05:52:15.997094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:57:21.840 [2024-12-09 05:52:16.001357] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:21.840 [2024-12-09 05:52:16.001379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:21.840 [2024-12-09 05:52:16.009400] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:21.840 [2024-12-09 05:52:16.009436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:21.840 [2024-12-09 05:52:16.017381] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:21.840 [2024-12-09 05:52:16.017410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:21.840 [2024-12-09 05:52:16.025353] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:21.840 [2024-12-09 05:52:16.025375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:57:21.840 [2024-12-09 05:52:16.033369] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:21.840 [2024-12-09 05:52:16.033390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:21.840 [2024-12-09 05:52:16.041352] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:21.840 [2024-12-09 05:52:16.041373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:21.840 [2024-12-09 05:52:16.049355] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:21.840 [2024-12-09 05:52:16.049376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:21.840 [2024-12-09 05:52:16.057359] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:21.840 [2024-12-09 05:52:16.057381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:21.840 [2024-12-09 05:52:16.061439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:57:22.098 [2024-12-09 05:52:16.065358] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.098 [2024-12-09 05:52:16.065380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.098 [2024-12-09 05:52:16.073356] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.098 [2024-12-09 05:52:16.073385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.098 [2024-12-09 05:52:16.081389] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.098 [2024-12-09 05:52:16.081425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.098 [2024-12-09 05:52:16.089383] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.098 [2024-12-09 05:52:16.089421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.098 [2024-12-09 05:52:16.097390] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.097432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 05:52:16.105406] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.105443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 05:52:16.113390] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.113432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 05:52:16.121386] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.121427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 05:52:16.129360] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.129384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 05:52:16.137380] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.137413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 
05:52:16.145415] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.145453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 05:52:16.153408] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.153447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 05:52:16.161353] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.161375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 05:52:16.169357] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.169379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 05:52:16.177369] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.177394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 05:52:16.185366] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.185400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 05:52:16.193374] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.193407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 05:52:16.201362] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.201386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 05:52:16.209364] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.209389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 05:52:16.217364] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.217388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 05:52:16.225358] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.225387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 05:52:16.233352] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.233374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 05:52:16.241356] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.241377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 05:52:16.249353] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.249375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 05:52:16.257360] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.257382] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 05:52:16.265371] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.265395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 05:52:16.273366] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.273388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 05:52:16.281353] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.281374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 05:52:16.289366] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.289387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 05:52:16.297362] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.297382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 05:52:16.305372] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.305402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 05:52:16.313376] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.313397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.099 [2024-12-09 05:52:16.321352] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.099 [2024-12-09 05:52:16.321374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.357 [2024-12-09 05:52:16.329359] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.357 [2024-12-09 05:52:16.329380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.357 [2024-12-09 05:52:16.337367] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.357 [2024-12-09 05:52:16.337388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.357 [2024-12-09 05:52:16.345370] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.357 [2024-12-09 05:52:16.345392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.357 [2024-12-09 05:52:16.353410] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.357 [2024-12-09 05:52:16.353437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.357 [2024-12-09 05:52:16.394521] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.357 [2024-12-09 05:52:16.394550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.357 [2024-12-09 05:52:16.401371] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.357 [2024-12-09 05:52:16.401395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.357 Running I/O for 5 seconds... 
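[editor's note] The error pair repeated above (subsystem.c:2126 followed by nvmf_rpc.c:1520) is what the target prints when an nvmf_subsystem_add_ns RPC requests a namespace ID that is already attached; once bdevperf finishes starting up, its timed run begins ("Running I/O for 5 seconds..."). A minimal sketch of the kind of call that would trigger exactly this rejection, assuming a subsystem that already has namespace 1 attached -- the NQN and bdev name below are placeholders, not values taken from this log, and the option spelling may differ between SPDK releases:

  # Placeholder NQN/bdev; namespace 1 is assumed to be attached already.
  RPC=./scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  # Requesting NSID 1 a second time makes spdk_nvmf_subsystem_add_ns_ext()
  # reject the call, which the RPC layer then logs as "Unable to add namespace".
  "$RPC" nvmf_subsystem_add_ns -n 1 "$NQN" Malloc1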
00:57:22.357 [2024-12-09 05:52:16.409362] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.357 [2024-12-09 05:52:16.409395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.357 [2024-12-09 05:52:16.426569] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.357 [2024-12-09 05:52:16.426597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.357 [2024-12-09 05:52:16.437567] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.357 [2024-12-09 05:52:16.437593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.357 [2024-12-09 05:52:16.448491] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.357 [2024-12-09 05:52:16.448519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.357 [2024-12-09 05:52:16.462494] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.357 [2024-12-09 05:52:16.462522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.357 [2024-12-09 05:52:16.472132] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.357 [2024-12-09 05:52:16.472157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.357 [2024-12-09 05:52:16.487338] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.357 [2024-12-09 05:52:16.487365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.357 [2024-12-09 05:52:16.503547] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.357 [2024-12-09 05:52:16.503591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.357 [2024-12-09 05:52:16.519799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.357 [2024-12-09 05:52:16.519827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.357 [2024-12-09 05:52:16.535580] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.357 [2024-12-09 05:52:16.535622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.357 [2024-12-09 05:52:16.551661] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.357 [2024-12-09 05:52:16.551702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.357 [2024-12-09 05:52:16.567502] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.357 [2024-12-09 05:52:16.567529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.357 [2024-12-09 05:52:16.577175] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.357 [2024-12-09 05:52:16.577218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.614 [2024-12-09 05:52:16.588914] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.614 [2024-12-09 05:52:16.588954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.614 [2024-12-09 05:52:16.599825] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.614 
[2024-12-09 05:52:16.599866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.614 [2024-12-09 05:52:16.614244] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.614 [2024-12-09 05:52:16.614283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.614 [2024-12-09 05:52:16.623673] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.614 [2024-12-09 05:52:16.623699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.614 [2024-12-09 05:52:16.639863] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.614 [2024-12-09 05:52:16.639888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.614 [2024-12-09 05:52:16.655799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.614 [2024-12-09 05:52:16.655840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.614 [2024-12-09 05:52:16.671806] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.614 [2024-12-09 05:52:16.671848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.614 [2024-12-09 05:52:16.689290] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.614 [2024-12-09 05:52:16.689331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.614 [2024-12-09 05:52:16.698990] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.614 [2024-12-09 05:52:16.699017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.614 [2024-12-09 05:52:16.710455] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.614 [2024-12-09 05:52:16.710482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.614 [2024-12-09 05:52:16.721384] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.614 [2024-12-09 05:52:16.721411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.614 [2024-12-09 05:52:16.732459] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.614 [2024-12-09 05:52:16.732500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.614 [2024-12-09 05:52:16.745424] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.614 [2024-12-09 05:52:16.745452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.614 [2024-12-09 05:52:16.755368] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.614 [2024-12-09 05:52:16.755396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.614 [2024-12-09 05:52:16.771039] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.614 [2024-12-09 05:52:16.771078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.614 [2024-12-09 05:52:16.780855] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.614 [2024-12-09 05:52:16.780881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.615 [2024-12-09 05:52:16.792705] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.615 [2024-12-09 05:52:16.792731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.615 [2024-12-09 05:52:16.803508] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.615 [2024-12-09 05:52:16.803534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.615 [2024-12-09 05:52:16.818469] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.615 [2024-12-09 05:52:16.818496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.615 [2024-12-09 05:52:16.828081] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.615 [2024-12-09 05:52:16.828107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.872 [2024-12-09 05:52:16.842296] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.872 [2024-12-09 05:52:16.842337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.872 [2024-12-09 05:52:16.852210] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.872 [2024-12-09 05:52:16.852236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.872 [2024-12-09 05:52:16.867102] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.872 [2024-12-09 05:52:16.867128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.872 [2024-12-09 05:52:16.883732] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.872 [2024-12-09 05:52:16.883757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.872 [2024-12-09 05:52:16.899511] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.872 [2024-12-09 05:52:16.899555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.872 [2024-12-09 05:52:16.915964] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.872 [2024-12-09 05:52:16.915989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.872 [2024-12-09 05:52:16.931322] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.872 [2024-12-09 05:52:16.931366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.872 [2024-12-09 05:52:16.949607] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.872 [2024-12-09 05:52:16.949647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.872 [2024-12-09 05:52:16.959900] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.872 [2024-12-09 05:52:16.959925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.872 [2024-12-09 05:52:16.975427] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.872 [2024-12-09 05:52:16.975455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.872 [2024-12-09 05:52:16.993068] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.872 [2024-12-09 05:52:16.993094] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.872 [2024-12-09 05:52:17.002545] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.872 [2024-12-09 05:52:17.002586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.872 [2024-12-09 05:52:17.014436] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.872 [2024-12-09 05:52:17.014464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.872 [2024-12-09 05:52:17.030393] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.872 [2024-12-09 05:52:17.030422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.872 [2024-12-09 05:52:17.040041] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.872 [2024-12-09 05:52:17.040066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.872 [2024-12-09 05:52:17.055270] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.872 [2024-12-09 05:52:17.055329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.872 [2024-12-09 05:52:17.074134] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.872 [2024-12-09 05:52:17.074160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:22.872 [2024-12-09 05:52:17.084707] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:22.872 [2024-12-09 05:52:17.084746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.130 [2024-12-09 05:52:17.099693] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.130 [2024-12-09 05:52:17.099737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.130 [2024-12-09 05:52:17.117370] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.130 [2024-12-09 05:52:17.117398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.130 [2024-12-09 05:52:17.126669] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.130 [2024-12-09 05:52:17.126693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.130 [2024-12-09 05:52:17.138689] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.130 [2024-12-09 05:52:17.138713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.130 [2024-12-09 05:52:17.154588] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.130 [2024-12-09 05:52:17.154613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.130 [2024-12-09 05:52:17.163671] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.130 [2024-12-09 05:52:17.163696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.130 [2024-12-09 05:52:17.178319] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.130 [2024-12-09 05:52:17.178360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.130 [2024-12-09 05:52:17.187726] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.130 [2024-12-09 05:52:17.187753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.130 [2024-12-09 05:52:17.202679] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.130 [2024-12-09 05:52:17.202704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.130 [2024-12-09 05:52:17.212170] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.130 [2024-12-09 05:52:17.212196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.130 [2024-12-09 05:52:17.226769] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.130 [2024-12-09 05:52:17.226792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.130 [2024-12-09 05:52:17.236426] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.130 [2024-12-09 05:52:17.236452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.130 [2024-12-09 05:52:17.251390] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.130 [2024-12-09 05:52:17.251430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.130 [2024-12-09 05:52:17.269349] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.130 [2024-12-09 05:52:17.269376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.130 [2024-12-09 05:52:17.278812] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.130 [2024-12-09 05:52:17.278838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.130 [2024-12-09 05:52:17.294963] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.130 [2024-12-09 05:52:17.294989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.130 [2024-12-09 05:52:17.314138] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.130 [2024-12-09 05:52:17.314164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.130 [2024-12-09 05:52:17.323624] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.130 [2024-12-09 05:52:17.323664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.130 [2024-12-09 05:52:17.340047] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.130 [2024-12-09 05:52:17.340073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.388 [2024-12-09 05:52:17.354814] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.388 [2024-12-09 05:52:17.354841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.388 [2024-12-09 05:52:17.364397] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.388 [2024-12-09 05:52:17.364424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.388 [2024-12-09 05:52:17.378107] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.388 [2024-12-09 05:52:17.378134] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.388 [2024-12-09 05:52:17.387480] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.388 [2024-12-09 05:52:17.387507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.388 [2024-12-09 05:52:17.403741] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.388 [2024-12-09 05:52:17.403766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.388 11575.00 IOPS, 90.43 MiB/s [2024-12-09T04:52:17.613Z] [2024-12-09 05:52:17.418539] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.388 [2024-12-09 05:52:17.418575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.388 [2024-12-09 05:52:17.428067] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.388 [2024-12-09 05:52:17.428092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.388 [2024-12-09 05:52:17.442471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.388 [2024-12-09 05:52:17.442499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.388 [2024-12-09 05:52:17.452157] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.388 [2024-12-09 05:52:17.452197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.388 [2024-12-09 05:52:17.466016] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.388 [2024-12-09 05:52:17.466041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.388 [2024-12-09 05:52:17.475954] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.388 [2024-12-09 05:52:17.475980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.388 [2024-12-09 05:52:17.491990] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.388 [2024-12-09 05:52:17.492030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.388 [2024-12-09 05:52:17.507201] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.388 [2024-12-09 05:52:17.507242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.388 [2024-12-09 05:52:17.516855] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.388 [2024-12-09 05:52:17.516880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.388 [2024-12-09 05:52:17.528544] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.388 [2024-12-09 05:52:17.528584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.388 [2024-12-09 05:52:17.539403] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.388 [2024-12-09 05:52:17.539431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.388 [2024-12-09 05:52:17.554613] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.388 [2024-12-09 05:52:17.554654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.388 [2024-12-09 
05:52:17.564017] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.388 [2024-12-09 05:52:17.564041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.388 [2024-12-09 05:52:17.577609] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.388 [2024-12-09 05:52:17.577647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.388 [2024-12-09 05:52:17.587098] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.388 [2024-12-09 05:52:17.587123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.388 [2024-12-09 05:52:17.598868] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.388 [2024-12-09 05:52:17.598892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.646 [2024-12-09 05:52:17.615846] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.646 [2024-12-09 05:52:17.615872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.646 [2024-12-09 05:52:17.631021] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.646 [2024-12-09 05:52:17.631048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.646 [2024-12-09 05:52:17.640869] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.646 [2024-12-09 05:52:17.640895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.646 [2024-12-09 05:52:17.652717] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.646 [2024-12-09 05:52:17.652751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.646 [2024-12-09 05:52:17.667999] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.646 [2024-12-09 05:52:17.668039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.646 [2024-12-09 05:52:17.682545] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.646 [2024-12-09 05:52:17.682571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.646 [2024-12-09 05:52:17.691829] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.646 [2024-12-09 05:52:17.691854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.646 [2024-12-09 05:52:17.705461] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.646 [2024-12-09 05:52:17.705488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.646 [2024-12-09 05:52:17.715402] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.646 [2024-12-09 05:52:17.715429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.646 [2024-12-09 05:52:17.730116] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.646 [2024-12-09 05:52:17.730143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.646 [2024-12-09 05:52:17.739691] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.646 [2024-12-09 05:52:17.739717] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.646 [2024-12-09 05:52:17.754383] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.646 [2024-12-09 05:52:17.754409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.646 [2024-12-09 05:52:17.764984] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.646 [2024-12-09 05:52:17.765008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.646 [2024-12-09 05:52:17.777728] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.646 [2024-12-09 05:52:17.777755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.646 [2024-12-09 05:52:17.787635] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.646 [2024-12-09 05:52:17.787663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.646 [2024-12-09 05:52:17.802190] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.646 [2024-12-09 05:52:17.802214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.646 [2024-12-09 05:52:17.811599] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.646 [2024-12-09 05:52:17.811637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.646 [2024-12-09 05:52:17.826309] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.646 [2024-12-09 05:52:17.826350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.646 [2024-12-09 05:52:17.836360] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.646 [2024-12-09 05:52:17.836386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.646 [2024-12-09 05:52:17.850918] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.646 [2024-12-09 05:52:17.850943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.646 [2024-12-09 05:52:17.867191] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.646 [2024-12-09 05:52:17.867218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.904 [2024-12-09 05:52:17.884859] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.904 [2024-12-09 05:52:17.884884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.904 [2024-12-09 05:52:17.895099] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.904 [2024-12-09 05:52:17.895132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.904 [2024-12-09 05:52:17.906653] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.904 [2024-12-09 05:52:17.906677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.904 [2024-12-09 05:52:17.917492] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.904 [2024-12-09 05:52:17.917518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.904 [2024-12-09 05:52:17.928793] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.904 [2024-12-09 05:52:17.928820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.904 [2024-12-09 05:52:17.940426] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.904 [2024-12-09 05:52:17.940455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.904 [2024-12-09 05:52:17.955017] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.904 [2024-12-09 05:52:17.955044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.904 [2024-12-09 05:52:17.964657] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.904 [2024-12-09 05:52:17.964683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.904 [2024-12-09 05:52:17.976597] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.904 [2024-12-09 05:52:17.976622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.904 [2024-12-09 05:52:17.987810] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.904 [2024-12-09 05:52:17.987835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.904 [2024-12-09 05:52:18.003581] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.904 [2024-12-09 05:52:18.003606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.904 [2024-12-09 05:52:18.019453] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.904 [2024-12-09 05:52:18.019480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.904 [2024-12-09 05:52:18.037508] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.904 [2024-12-09 05:52:18.037535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.904 [2024-12-09 05:52:18.047373] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.904 [2024-12-09 05:52:18.047401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.904 [2024-12-09 05:52:18.059243] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.904 [2024-12-09 05:52:18.059268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.904 [2024-12-09 05:52:18.075387] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.904 [2024-12-09 05:52:18.075414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.904 [2024-12-09 05:52:18.093463] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.905 [2024-12-09 05:52:18.093490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.905 [2024-12-09 05:52:18.103144] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.905 [2024-12-09 05:52:18.103171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:23.905 [2024-12-09 05:52:18.115216] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:23.905 [2024-12-09 05:52:18.115243] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.163 [2024-12-09 05:52:18.131517] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.163 [2024-12-09 05:52:18.131544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.163 [2024-12-09 05:52:18.146813] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.163 [2024-12-09 05:52:18.146851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.163 [2024-12-09 05:52:18.156564] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.163 [2024-12-09 05:52:18.156591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.163 [2024-12-09 05:52:18.170885] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.163 [2024-12-09 05:52:18.170920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.163 [2024-12-09 05:52:18.188619] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.163 [2024-12-09 05:52:18.188671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.163 [2024-12-09 05:52:18.198847] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.163 [2024-12-09 05:52:18.198881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.163 [2024-12-09 05:52:18.211133] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.163 [2024-12-09 05:52:18.211159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.163 [2024-12-09 05:52:18.227593] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.163 [2024-12-09 05:52:18.227636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.163 [2024-12-09 05:52:18.245425] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.163 [2024-12-09 05:52:18.245452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.163 [2024-12-09 05:52:18.254904] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.163 [2024-12-09 05:52:18.254933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.163 [2024-12-09 05:52:18.267147] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.163 [2024-12-09 05:52:18.267173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.163 [2024-12-09 05:52:18.282798] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.163 [2024-12-09 05:52:18.282823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.163 [2024-12-09 05:52:18.292698] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.163 [2024-12-09 05:52:18.292723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.163 [2024-12-09 05:52:18.304740] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.163 [2024-12-09 05:52:18.304765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.163 [2024-12-09 05:52:18.319220] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.163 [2024-12-09 05:52:18.319261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.163 [2024-12-09 05:52:18.328671] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.163 [2024-12-09 05:52:18.328698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.163 [2024-12-09 05:52:18.340516] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.163 [2024-12-09 05:52:18.340559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.163 [2024-12-09 05:52:18.351445] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.163 [2024-12-09 05:52:18.351486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.163 [2024-12-09 05:52:18.366727] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.163 [2024-12-09 05:52:18.366752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.163 [2024-12-09 05:52:18.375884] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.163 [2024-12-09 05:52:18.375910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.422 [2024-12-09 05:52:18.389857] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.422 [2024-12-09 05:52:18.389884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.422 [2024-12-09 05:52:18.400872] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.422 [2024-12-09 05:52:18.400896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.422 [2024-12-09 05:52:18.411526] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.422 [2024-12-09 05:52:18.411566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.422 11580.00 IOPS, 90.47 MiB/s [2024-12-09T04:52:18.647Z] [2024-12-09 05:52:18.427182] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.422 [2024-12-09 05:52:18.427208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.422 [2024-12-09 05:52:18.436644] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.422 [2024-12-09 05:52:18.436670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.422 [2024-12-09 05:52:18.448547] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.422 [2024-12-09 05:52:18.448589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.422 [2024-12-09 05:52:18.464001] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.422 [2024-12-09 05:52:18.464026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.422 [2024-12-09 05:52:18.478951] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.422 [2024-12-09 05:52:18.478979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.422 [2024-12-09 05:52:18.488388] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
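[editor's note] The bdevperf progress samples mixed into the error stream so far (11575.00 IOPS at 90.43 MiB/s and 11580.00 IOPS at 90.47 MiB/s) are mutually consistent: dividing bandwidth by IOPS gives roughly 8192 bytes per I/O, which suggests this pass is running 8 KiB I/O (an inference from the numbers, not something the log states directly). A quick check:

  # Bytes per I/O = (MiB/s * 1024 * 1024) / IOPS; both samples land on ~8192.
  awk 'BEGIN {
    printf "%.0f bytes/io\n", 90.43 * 1024 * 1024 / 11575.00;
    printf "%.0f bytes/io\n", 90.47 * 1024 * 1024 / 11580.00;
  }'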
00:57:24.422 [2024-12-09 05:52:18.488422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.422 [2024-12-09 05:52:18.502937] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.422 [2024-12-09 05:52:18.502962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.422 [2024-12-09 05:52:18.513453] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.422 [2024-12-09 05:52:18.513480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.422 [2024-12-09 05:52:18.524765] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.422 [2024-12-09 05:52:18.524804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.422 [2024-12-09 05:52:18.535604] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.422 [2024-12-09 05:52:18.535642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.422 [2024-12-09 05:52:18.551415] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.422 [2024-12-09 05:52:18.551442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.422 [2024-12-09 05:52:18.569437] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.422 [2024-12-09 05:52:18.569464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.422 [2024-12-09 05:52:18.580122] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.422 [2024-12-09 05:52:18.580147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.422 [2024-12-09 05:52:18.595465] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.422 [2024-12-09 05:52:18.595492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.422 [2024-12-09 05:52:18.611754] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.422 [2024-12-09 05:52:18.611781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.422 [2024-12-09 05:52:18.627636] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.422 [2024-12-09 05:52:18.627662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.422 [2024-12-09 05:52:18.645191] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.422 [2024-12-09 05:52:18.645216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.681 [2024-12-09 05:52:18.654895] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.681 [2024-12-09 05:52:18.654919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.681 [2024-12-09 05:52:18.666699] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.681 [2024-12-09 05:52:18.666726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.681 [2024-12-09 05:52:18.681697] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.681 [2024-12-09 05:52:18.681724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.681 [2024-12-09 05:52:18.691236] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.681 [2024-12-09 05:52:18.691285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.681 [2024-12-09 05:52:18.707488] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.681 [2024-12-09 05:52:18.707514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.681 [2024-12-09 05:52:18.724009] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.681 [2024-12-09 05:52:18.724035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.681 [2024-12-09 05:52:18.738571] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.681 [2024-12-09 05:52:18.738597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.681 [2024-12-09 05:52:18.748286] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.681 [2024-12-09 05:52:18.748323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.681 [2024-12-09 05:52:18.762476] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.681 [2024-12-09 05:52:18.762504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.681 [2024-12-09 05:52:18.772241] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.681 [2024-12-09 05:52:18.772265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.681 [2024-12-09 05:52:18.787213] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.681 [2024-12-09 05:52:18.787254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.681 [2024-12-09 05:52:18.801577] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.681 [2024-12-09 05:52:18.801605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.681 [2024-12-09 05:52:18.810356] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.681 [2024-12-09 05:52:18.810382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.681 [2024-12-09 05:52:18.826454] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.681 [2024-12-09 05:52:18.826481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.681 [2024-12-09 05:52:18.835985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.681 [2024-12-09 05:52:18.836011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.681 [2024-12-09 05:52:18.847565] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.681 [2024-12-09 05:52:18.847606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.681 [2024-12-09 05:52:18.860384] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.681 [2024-12-09 05:52:18.860410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.681 [2024-12-09 05:52:18.874590] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.681 [2024-12-09 05:52:18.874643] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.681 [2024-12-09 05:52:18.884330] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.681 [2024-12-09 05:52:18.884357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.681 [2024-12-09 05:52:18.898076] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.681 [2024-12-09 05:52:18.898100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.939 [2024-12-09 05:52:18.907803] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.939 [2024-12-09 05:52:18.907830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.939 [2024-12-09 05:52:18.923716] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.939 [2024-12-09 05:52:18.923756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.939 [2024-12-09 05:52:18.939321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.939 [2024-12-09 05:52:18.939348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.939 [2024-12-09 05:52:18.949002] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.939 [2024-12-09 05:52:18.949028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.939 [2024-12-09 05:52:18.960973] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.939 [2024-12-09 05:52:18.960999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.939 [2024-12-09 05:52:18.972002] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.939 [2024-12-09 05:52:18.972026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.939 [2024-12-09 05:52:18.987236] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.939 [2024-12-09 05:52:18.987261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.939 [2024-12-09 05:52:19.005738] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.939 [2024-12-09 05:52:19.005763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.939 [2024-12-09 05:52:19.015370] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.940 [2024-12-09 05:52:19.015398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.940 [2024-12-09 05:52:19.027182] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.940 [2024-12-09 05:52:19.027208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.940 [2024-12-09 05:52:19.043398] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.940 [2024-12-09 05:52:19.043425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.940 [2024-12-09 05:52:19.059259] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.940 [2024-12-09 05:52:19.059295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.940 [2024-12-09 05:52:19.069303] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.940 [2024-12-09 05:52:19.069331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.940 [2024-12-09 05:52:19.081032] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.940 [2024-12-09 05:52:19.081058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.940 [2024-12-09 05:52:19.091792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.940 [2024-12-09 05:52:19.091817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.940 [2024-12-09 05:52:19.106904] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.940 [2024-12-09 05:52:19.106930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.940 [2024-12-09 05:52:19.125634] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.940 [2024-12-09 05:52:19.125683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.940 [2024-12-09 05:52:19.135350] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.940 [2024-12-09 05:52:19.135376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.940 [2024-12-09 05:52:19.146907] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.940 [2024-12-09 05:52:19.146933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:24.940 [2024-12-09 05:52:19.160924] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:24.940 [2024-12-09 05:52:19.160952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.198 [2024-12-09 05:52:19.170706] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.198 [2024-12-09 05:52:19.170746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.198 [2024-12-09 05:52:19.182818] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.198 [2024-12-09 05:52:19.182842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.198 [2024-12-09 05:52:19.198024] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.198 [2024-12-09 05:52:19.198049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.198 [2024-12-09 05:52:19.207469] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.198 [2024-12-09 05:52:19.207525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.198 [2024-12-09 05:52:19.219211] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.198 [2024-12-09 05:52:19.219252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.198 [2024-12-09 05:52:19.234332] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.198 [2024-12-09 05:52:19.234362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.198 [2024-12-09 05:52:19.244005] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.198 [2024-12-09 05:52:19.244032] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.198 [2024-12-09 05:52:19.258192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.198 [2024-12-09 05:52:19.258220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.198 [2024-12-09 05:52:19.267861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.198 [2024-12-09 05:52:19.267888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.198 [2024-12-09 05:52:19.282638] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.198 [2024-12-09 05:52:19.282664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.198 [2024-12-09 05:52:19.299107] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.198 [2024-12-09 05:52:19.299134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.198 [2024-12-09 05:52:19.308854] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.198 [2024-12-09 05:52:19.308880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.198 [2024-12-09 05:52:19.320810] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.198 [2024-12-09 05:52:19.320837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.198 [2024-12-09 05:52:19.331784] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.198 [2024-12-09 05:52:19.331810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.198 [2024-12-09 05:52:19.347765] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.198 [2024-12-09 05:52:19.347791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.198 [2024-12-09 05:52:19.363027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.198 [2024-12-09 05:52:19.363065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.198 [2024-12-09 05:52:19.372482] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.199 [2024-12-09 05:52:19.372509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.199 [2024-12-09 05:52:19.386019] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.199 [2024-12-09 05:52:19.386045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.199 [2024-12-09 05:52:19.395143] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.199 [2024-12-09 05:52:19.395168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.199 [2024-12-09 05:52:19.406839] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.199 [2024-12-09 05:52:19.406865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.457 11618.67 IOPS, 90.77 MiB/s [2024-12-09T04:52:19.682Z] [2024-12-09 05:52:19.423121] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.457 [2024-12-09 05:52:19.423148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.457 [2024-12-09 
05:52:19.433070] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.457 [2024-12-09 05:52:19.433096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.457 [2024-12-09 05:52:19.445013] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.457 [2024-12-09 05:52:19.445039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.457 [2024-12-09 05:52:19.455872] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.457 [2024-12-09 05:52:19.455913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.457 [2024-12-09 05:52:19.468488] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.458 [2024-12-09 05:52:19.468515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.458 [2024-12-09 05:52:19.483768] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.458 [2024-12-09 05:52:19.483796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.458 [2024-12-09 05:52:19.498710] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.458 [2024-12-09 05:52:19.498737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.458 [2024-12-09 05:52:19.508545] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.458 [2024-12-09 05:52:19.508573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.458 [2024-12-09 05:52:19.522165] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.458 [2024-12-09 05:52:19.522205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.458 [2024-12-09 05:52:19.532005] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.458 [2024-12-09 05:52:19.532032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.458 [2024-12-09 05:52:19.545340] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.458 [2024-12-09 05:52:19.545382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.458 [2024-12-09 05:52:19.554945] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.458 [2024-12-09 05:52:19.554970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.458 [2024-12-09 05:52:19.570592] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.458 [2024-12-09 05:52:19.570632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.458 [2024-12-09 05:52:19.579947] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.458 [2024-12-09 05:52:19.579972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.458 [2024-12-09 05:52:19.594385] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.458 [2024-12-09 05:52:19.594413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.458 [2024-12-09 05:52:19.603883] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.458 [2024-12-09 05:52:19.603924] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.458 [2024-12-09 05:52:19.618073] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.458 [2024-12-09 05:52:19.618098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.458 [2024-12-09 05:52:19.627687] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.458 [2024-12-09 05:52:19.627713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.458 [2024-12-09 05:52:19.643294] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.458 [2024-12-09 05:52:19.643322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.458 [2024-12-09 05:52:19.661263] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.458 [2024-12-09 05:52:19.661319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.458 [2024-12-09 05:52:19.670891] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.458 [2024-12-09 05:52:19.670917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.716 [2024-12-09 05:52:19.686406] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.716 [2024-12-09 05:52:19.686433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.716 [2024-12-09 05:52:19.695912] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.716 [2024-12-09 05:52:19.695954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.716 [2024-12-09 05:52:19.707760] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.716 [2024-12-09 05:52:19.707786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.716 [2024-12-09 05:52:19.722246] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.716 [2024-12-09 05:52:19.722281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.716 [2024-12-09 05:52:19.732192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.716 [2024-12-09 05:52:19.732219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.716 [2024-12-09 05:52:19.747674] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.716 [2024-12-09 05:52:19.747700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.716 [2024-12-09 05:52:19.765403] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.716 [2024-12-09 05:52:19.765431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.716 [2024-12-09 05:52:19.775531] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.716 [2024-12-09 05:52:19.775558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.716 [2024-12-09 05:52:19.789443] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.716 [2024-12-09 05:52:19.789471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.716 [2024-12-09 05:52:19.798978] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.716 [2024-12-09 05:52:19.799003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.716 [2024-12-09 05:52:19.814745] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.716 [2024-12-09 05:52:19.814772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.716 [2024-12-09 05:52:19.824593] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.716 [2024-12-09 05:52:19.824636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.716 [2024-12-09 05:52:19.836403] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.716 [2024-12-09 05:52:19.836430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.716 [2024-12-09 05:52:19.850754] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.716 [2024-12-09 05:52:19.850781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.716 [2024-12-09 05:52:19.870319] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.716 [2024-12-09 05:52:19.870346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.716 [2024-12-09 05:52:19.880607] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.716 [2024-12-09 05:52:19.880632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.716 [2024-12-09 05:52:19.896067] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.716 [2024-12-09 05:52:19.896094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.716 [2024-12-09 05:52:19.910908] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.716 [2024-12-09 05:52:19.910935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.716 [2024-12-09 05:52:19.920021] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.716 [2024-12-09 05:52:19.920048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.716 [2024-12-09 05:52:19.936299] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.716 [2024-12-09 05:52:19.936327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.973 [2024-12-09 05:52:19.950961] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.973 [2024-12-09 05:52:19.950988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.973 [2024-12-09 05:52:19.961173] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.973 [2024-12-09 05:52:19.961199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.973 [2024-12-09 05:52:19.972959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.973 [2024-12-09 05:52:19.972985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.973 [2024-12-09 05:52:19.983899] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.973 [2024-12-09 05:52:19.983924] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.973 [2024-12-09 05:52:19.998857] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.973 [2024-12-09 05:52:19.998897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.973 [2024-12-09 05:52:20.018479] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.973 [2024-12-09 05:52:20.018527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.973 [2024-12-09 05:52:20.027953] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.973 [2024-12-09 05:52:20.027983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.973 [2024-12-09 05:52:20.044113] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.973 [2024-12-09 05:52:20.044141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.973 [2024-12-09 05:52:20.059066] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.973 [2024-12-09 05:52:20.059095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.973 [2024-12-09 05:52:20.069026] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.973 [2024-12-09 05:52:20.069068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.973 [2024-12-09 05:52:20.080912] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.973 [2024-12-09 05:52:20.080940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.973 [2024-12-09 05:52:20.092863] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.973 [2024-12-09 05:52:20.092890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.973 [2024-12-09 05:52:20.104180] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.973 [2024-12-09 05:52:20.104206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.973 [2024-12-09 05:52:20.120324] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.973 [2024-12-09 05:52:20.120366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.973 [2024-12-09 05:52:20.130376] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.973 [2024-12-09 05:52:20.130403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.973 [2024-12-09 05:52:20.142620] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.973 [2024-12-09 05:52:20.142644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.973 [2024-12-09 05:52:20.157327] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.973 [2024-12-09 05:52:20.157354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.973 [2024-12-09 05:52:20.167044] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.973 [2024-12-09 05:52:20.167071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.973 [2024-12-09 05:52:20.178818] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.973 [2024-12-09 05:52:20.178843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:25.973 [2024-12-09 05:52:20.195172] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:25.973 [2024-12-09 05:52:20.195212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.230 [2024-12-09 05:52:20.204691] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.230 [2024-12-09 05:52:20.204717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.230 [2024-12-09 05:52:20.216691] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.230 [2024-12-09 05:52:20.216717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.230 [2024-12-09 05:52:20.231643] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.230 [2024-12-09 05:52:20.231683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.230 [2024-12-09 05:52:20.247217] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.230 [2024-12-09 05:52:20.247258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.230 [2024-12-09 05:52:20.265188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.230 [2024-12-09 05:52:20.265229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.230 [2024-12-09 05:52:20.275646] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.230 [2024-12-09 05:52:20.275671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.230 [2024-12-09 05:52:20.291149] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.230 [2024-12-09 05:52:20.291175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.230 [2024-12-09 05:52:20.309193] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.230 [2024-12-09 05:52:20.309219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.230 [2024-12-09 05:52:20.319883] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.230 [2024-12-09 05:52:20.319908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.230 [2024-12-09 05:52:20.334782] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.230 [2024-12-09 05:52:20.334810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.230 [2024-12-09 05:52:20.344558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.230 [2024-12-09 05:52:20.344585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.230 [2024-12-09 05:52:20.358679] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.230 [2024-12-09 05:52:20.358706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.230 [2024-12-09 05:52:20.368789] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.230 [2024-12-09 05:52:20.368831] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.231 [2024-12-09 05:52:20.381165] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.231 [2024-12-09 05:52:20.381191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.231 [2024-12-09 05:52:20.392003] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.231 [2024-12-09 05:52:20.392030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.231 [2024-12-09 05:52:20.407441] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.231 [2024-12-09 05:52:20.407469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.231 11618.50 IOPS, 90.77 MiB/s [2024-12-09T04:52:20.456Z] [2024-12-09 05:52:20.425151] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.231 [2024-12-09 05:52:20.425192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.231 [2024-12-09 05:52:20.435192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.231 [2024-12-09 05:52:20.435219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.231 [2024-12-09 05:52:20.446918] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.231 [2024-12-09 05:52:20.446960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.488 [2024-12-09 05:52:20.461967] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.488 [2024-12-09 05:52:20.461995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.488 [2024-12-09 05:52:20.471445] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.488 [2024-12-09 05:52:20.471472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.488 [2024-12-09 05:52:20.486102] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.488 [2024-12-09 05:52:20.486128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.488 [2024-12-09 05:52:20.496350] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.488 [2024-12-09 05:52:20.496376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.488 [2024-12-09 05:52:20.512263] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.488 [2024-12-09 05:52:20.512314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.488 [2024-12-09 05:52:20.528041] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.488 [2024-12-09 05:52:20.528067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.488 [2024-12-09 05:52:20.543458] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.488 [2024-12-09 05:52:20.543485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.488 [2024-12-09 05:52:20.561571] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.488 [2024-12-09 05:52:20.561597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.488 [2024-12-09 
05:52:20.572080] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.488 [2024-12-09 05:52:20.572105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.488 [2024-12-09 05:52:20.587637] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.488 [2024-12-09 05:52:20.587671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.488 [2024-12-09 05:52:20.605325] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.488 [2024-12-09 05:52:20.605351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.488 [2024-12-09 05:52:20.615150] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.488 [2024-12-09 05:52:20.615176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.488 [2024-12-09 05:52:20.628932] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.488 [2024-12-09 05:52:20.628971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.488 [2024-12-09 05:52:20.639078] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.488 [2024-12-09 05:52:20.639103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.488 [2024-12-09 05:52:20.650991] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.488 [2024-12-09 05:52:20.651030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.488 [2024-12-09 05:52:20.665711] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.488 [2024-12-09 05:52:20.665738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.488 [2024-12-09 05:52:20.675635] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.488 [2024-12-09 05:52:20.675660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.488 [2024-12-09 05:52:20.690780] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.488 [2024-12-09 05:52:20.690806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.488 [2024-12-09 05:52:20.700235] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.488 [2024-12-09 05:52:20.700283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.746 [2024-12-09 05:52:20.714630] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.746 [2024-12-09 05:52:20.714671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.746 [2024-12-09 05:52:20.723916] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.746 [2024-12-09 05:52:20.723941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.746 [2024-12-09 05:52:20.737892] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.746 [2024-12-09 05:52:20.737918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.746 [2024-12-09 05:52:20.747660] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.746 [2024-12-09 05:52:20.747685] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.746 [2024-12-09 05:52:20.763206] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.746 [2024-12-09 05:52:20.763231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.746 [2024-12-09 05:52:20.772961] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.746 [2024-12-09 05:52:20.772986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.746 [2024-12-09 05:52:20.785028] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.746 [2024-12-09 05:52:20.785053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.746 [2024-12-09 05:52:20.795762] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.746 [2024-12-09 05:52:20.795786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.746 [2024-12-09 05:52:20.810673] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.746 [2024-12-09 05:52:20.810700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.746 [2024-12-09 05:52:20.819787] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.746 [2024-12-09 05:52:20.819823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.746 [2024-12-09 05:52:20.834212] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.746 [2024-12-09 05:52:20.834238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.746 [2024-12-09 05:52:20.844376] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.746 [2024-12-09 05:52:20.844401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.746 [2024-12-09 05:52:20.858591] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.746 [2024-12-09 05:52:20.858616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.746 [2024-12-09 05:52:20.868398] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.746 [2024-12-09 05:52:20.868424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.746 [2024-12-09 05:52:20.882473] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.746 [2024-12-09 05:52:20.882499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.746 [2024-12-09 05:52:20.891845] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.746 [2024-12-09 05:52:20.891871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.746 [2024-12-09 05:52:20.907975] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.746 [2024-12-09 05:52:20.908000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.746 [2024-12-09 05:52:20.923410] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.746 [2024-12-09 05:52:20.923437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.746 [2024-12-09 05:52:20.939723] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.746 [2024-12-09 05:52:20.939763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:26.746 [2024-12-09 05:52:20.955709] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:26.746 [2024-12-09 05:52:20.955735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.003 [2024-12-09 05:52:20.971285] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.003 [2024-12-09 05:52:20.971326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.003 [2024-12-09 05:52:20.981162] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.003 [2024-12-09 05:52:20.981188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.003 [2024-12-09 05:52:20.992944] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.003 [2024-12-09 05:52:20.992968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.003 [2024-12-09 05:52:21.003851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.003 [2024-12-09 05:52:21.003876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.003 [2024-12-09 05:52:21.017915] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.003 [2024-12-09 05:52:21.017957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.003 [2024-12-09 05:52:21.027797] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.003 [2024-12-09 05:52:21.027823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.003 [2024-12-09 05:52:21.042898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.003 [2024-12-09 05:52:21.042924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.003 [2024-12-09 05:52:21.052933] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.003 [2024-12-09 05:52:21.052959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.003 [2024-12-09 05:52:21.064812] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.003 [2024-12-09 05:52:21.064861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.003 [2024-12-09 05:52:21.076019] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.003 [2024-12-09 05:52:21.076043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.003 [2024-12-09 05:52:21.092456] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.003 [2024-12-09 05:52:21.092496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.003 [2024-12-09 05:52:21.102413] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.003 [2024-12-09 05:52:21.102439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.003 [2024-12-09 05:52:21.114424] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.003 [2024-12-09 05:52:21.114449] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.003 [2024-12-09 05:52:21.130619] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.003 [2024-12-09 05:52:21.130645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.003 [2024-12-09 05:52:21.140343] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.003 [2024-12-09 05:52:21.140369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.003 [2024-12-09 05:52:21.155292] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.003 [2024-12-09 05:52:21.155332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.003 [2024-12-09 05:52:21.165232] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.003 [2024-12-09 05:52:21.165280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.003 [2024-12-09 05:52:21.177447] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.003 [2024-12-09 05:52:21.177473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.003 [2024-12-09 05:52:21.187950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.003 [2024-12-09 05:52:21.187975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.003 [2024-12-09 05:52:21.203713] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.003 [2024-12-09 05:52:21.203739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.003 [2024-12-09 05:52:21.221571] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.003 [2024-12-09 05:52:21.221598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.259 [2024-12-09 05:52:21.232049] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.259 [2024-12-09 05:52:21.232073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.259 [2024-12-09 05:52:21.245523] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.259 [2024-12-09 05:52:21.245563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.259 [2024-12-09 05:52:21.255468] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.259 [2024-12-09 05:52:21.255495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.259 [2024-12-09 05:52:21.267371] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.259 [2024-12-09 05:52:21.267396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.259 [2024-12-09 05:52:21.283021] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.259 [2024-12-09 05:52:21.283045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.259 [2024-12-09 05:52:21.292643] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.259 [2024-12-09 05:52:21.292667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.259 [2024-12-09 05:52:21.304163] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.259 [2024-12-09 05:52:21.304203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.259 [2024-12-09 05:52:21.314302] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.259 [2024-12-09 05:52:21.314342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.259 [2024-12-09 05:52:21.326227] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.259 [2024-12-09 05:52:21.326266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.259 [2024-12-09 05:52:21.336984] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.259 [2024-12-09 05:52:21.337008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.260 [2024-12-09 05:52:21.348126] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.260 [2024-12-09 05:52:21.348152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.260 [2024-12-09 05:52:21.361601] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.260 [2024-12-09 05:52:21.361627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.260 [2024-12-09 05:52:21.371585] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.260 [2024-12-09 05:52:21.371611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.260 [2024-12-09 05:52:21.383742] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.260 [2024-12-09 05:52:21.383767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.260 [2024-12-09 05:52:21.399448] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.260 [2024-12-09 05:52:21.399482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.260 [2024-12-09 05:52:21.409129] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.260 [2024-12-09 05:52:21.409156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.260 [2024-12-09 05:52:21.420933] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.260 [2024-12-09 05:52:21.420958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.260 11607.00 IOPS, 90.68 MiB/s [2024-12-09T04:52:21.485Z] [2024-12-09 05:52:21.430368] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.260 [2024-12-09 05:52:21.430395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.260 00:57:27.260 Latency(us) 00:57:27.260 [2024-12-09T04:52:21.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:57:27.260 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:57:27.260 Nvme1n1 : 5.01 11611.64 90.72 0.00 0.00 11009.94 3252.53 18058.81 00:57:27.260 [2024-12-09T04:52:21.485Z] =================================================================================================================== 00:57:27.260 [2024-12-09T04:52:21.485Z] Total : 11611.64 90.72 0.00 0.00 11009.94 3252.53 18058.81 00:57:27.260 [2024-12-09 
05:52:21.437374] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.260 [2024-12-09 05:52:21.437399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.260 [2024-12-09 05:52:21.445372] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.260 [2024-12-09 05:52:21.445396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.260 [2024-12-09 05:52:21.453355] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.260 [2024-12-09 05:52:21.453378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.260 [2024-12-09 05:52:21.461406] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.260 [2024-12-09 05:52:21.461457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.260 [2024-12-09 05:52:21.469408] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.260 [2024-12-09 05:52:21.469460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.260 [2024-12-09 05:52:21.477404] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.260 [2024-12-09 05:52:21.477448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.516 [2024-12-09 05:52:21.485404] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.485450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 [2024-12-09 05:52:21.493401] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.493449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 [2024-12-09 05:52:21.501403] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.501452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 [2024-12-09 05:52:21.509400] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.509448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 [2024-12-09 05:52:21.517400] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.517450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 [2024-12-09 05:52:21.525402] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.525449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 [2024-12-09 05:52:21.533402] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.533448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 [2024-12-09 05:52:21.541408] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.541458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 [2024-12-09 05:52:21.549406] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.549456] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 [2024-12-09 05:52:21.557402] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.557448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 [2024-12-09 05:52:21.565406] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.565450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 [2024-12-09 05:52:21.573407] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.573452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 [2024-12-09 05:52:21.581402] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.581437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 [2024-12-09 05:52:21.589368] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.589390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 [2024-12-09 05:52:21.597372] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.597396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 [2024-12-09 05:52:21.605371] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.605393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 [2024-12-09 05:52:21.613353] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.613389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 [2024-12-09 05:52:21.621420] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.621466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 [2024-12-09 05:52:21.629406] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.629451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 [2024-12-09 05:52:21.637366] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.637390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 [2024-12-09 05:52:21.645358] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.645380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 [2024-12-09 05:52:21.653360] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.653382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 [2024-12-09 05:52:21.661359] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.661380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 [2024-12-09 05:52:21.669359] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.669380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 [2024-12-09 05:52:21.677358] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.677379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 [2024-12-09 05:52:21.685363] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.685384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 [2024-12-09 05:52:21.693359] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.693380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 [2024-12-09 05:52:21.701355] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:57:27.517 [2024-12-09 05:52:21.701375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (801397) - No such process 00:57:27.517 05:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 801397 00:57:27.517 05:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:57:27.517 05:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:27.517 05:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:57:27.517 05:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:27.517 05:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:57:27.517 05:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:27.517 05:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:57:27.517 delay0 00:57:27.517 05:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:27.517 05:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:57:27.517 05:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:27.517 05:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:57:27.517 05:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:27.517 05:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:57:27.775 [2024-12-09 05:52:21.831228] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:57:35.886 Initializing 
NVMe Controllers 00:57:35.886 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:57:35.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:57:35.886 Initialization complete. Launching workers. 00:57:35.886 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 237, failed: 19858 00:57:35.886 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 19964, failed to submit 131 00:57:35.886 success 19886, unsuccessful 78, failed 0 00:57:35.886 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:57:35.886 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:57:35.886 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:57:35.886 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:57:35.886 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:57:35.886 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:57:35.886 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:57:35.886 05:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:57:35.886 rmmod nvme_tcp 00:57:35.886 rmmod nvme_fabrics 00:57:35.886 rmmod nvme_keyring 00:57:35.886 05:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:57:35.886 05:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:57:35.886 05:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:57:35.886 05:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 800178 ']' 00:57:35.886 05:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 800178 00:57:35.886 05:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 800178 ']' 00:57:35.886 05:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 800178 00:57:35.886 05:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:57:35.886 05:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:57:35.886 05:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 800178 00:57:35.886 05:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:57:35.886 05:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:57:35.886 05:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 800178' 00:57:35.886 killing process with pid 800178 00:57:35.886 05:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 800178 00:57:35.886 05:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 800178 00:57:35.886 
05:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:57:35.886 05:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:57:35.886 05:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:57:35.886 05:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:57:35.886 05:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:57:35.886 05:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:57:35.886 05:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:57:35.886 05:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:57:35.886 05:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:57:35.886 05:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:35.886 05:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:35.886 05:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:37.263 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:57:37.263 00:57:37.263 real 0m28.969s 00:57:37.263 user 0m40.805s 00:57:37.263 sys 0m10.574s 00:57:37.263 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:57:37.263 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:57:37.263 ************************************ 00:57:37.263 END TEST nvmf_zcopy 00:57:37.263 ************************************ 00:57:37.263 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:57:37.263 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:57:37.263 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:57:37.263 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:57:37.263 ************************************ 00:57:37.263 START TEST nvmf_nmic 00:57:37.263 ************************************ 00:57:37.263 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:57:37.521 * Looking for test storage... 
00:57:37.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:57:37.521 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:57:37.521 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:57:37.521 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:57:37.521 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:57:37.521 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:57:37.521 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:57:37.521 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:57:37.521 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:57:37.521 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:57:37.521 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:57:37.521 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:57:37.521 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:57:37.521 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:57:37.521 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:57:37.521 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:57:37.521 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:57:37.521 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:57:37.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:37.522 --rc genhtml_branch_coverage=1 00:57:37.522 --rc genhtml_function_coverage=1 00:57:37.522 --rc genhtml_legend=1 00:57:37.522 --rc geninfo_all_blocks=1 00:57:37.522 --rc geninfo_unexecuted_blocks=1 00:57:37.522 00:57:37.522 ' 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:57:37.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:37.522 --rc genhtml_branch_coverage=1 00:57:37.522 --rc genhtml_function_coverage=1 00:57:37.522 --rc genhtml_legend=1 00:57:37.522 --rc geninfo_all_blocks=1 00:57:37.522 --rc geninfo_unexecuted_blocks=1 00:57:37.522 00:57:37.522 ' 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:57:37.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:37.522 --rc genhtml_branch_coverage=1 00:57:37.522 --rc genhtml_function_coverage=1 00:57:37.522 --rc genhtml_legend=1 00:57:37.522 --rc geninfo_all_blocks=1 00:57:37.522 --rc geninfo_unexecuted_blocks=1 00:57:37.522 00:57:37.522 ' 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:57:37.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:37.522 --rc genhtml_branch_coverage=1 00:57:37.522 --rc genhtml_function_coverage=1 00:57:37.522 --rc genhtml_legend=1 00:57:37.522 --rc geninfo_all_blocks=1 00:57:37.522 --rc geninfo_unexecuted_blocks=1 00:57:37.522 00:57:37.522 ' 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:57:37.522 05:52:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:57:37.522 05:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:57:40.047 05:52:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:57:40.047 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:57:40.047 05:52:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:57:40.047 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:57:40.047 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:57:40.048 Found net devices under 0000:0a:00.0: cvl_0_0 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:57:40.048 
05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:57:40.048 Found net devices under 0000:0a:00.1: cvl_0_1 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
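The nvmf_tcp_init steps being traced around this point build a small two-endpoint topology out of the two E810 ports found above: one port is moved into a private network namespace for the target, the other stays in the default namespace for the initiator. Gathered into one place (device names and addresses are the ones used in this run), the sequence is roughly:

#!/usr/bin/env bash
# Condensed sketch of the nvmf_tcp_init topology built around this point in the log.
set -ex

target_if=cvl_0_0           # moved into a private namespace, used by the nvmf target
initiator_if=cvl_0_1        # stays in the default namespace, used by the initiator
target_ns=cvl_0_0_ns_spdk
initiator_ip=10.0.0.1
target_ip=10.0.0.2

ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"

ip netns add "$target_ns"
ip link set "$target_if" netns "$target_ns"

ip addr add "$initiator_ip/24" dev "$initiator_if"
ip netns exec "$target_ns" ip addr add "$target_ip/24" dev "$target_if"

# Remaining steps, traced just below: bring both links (plus the namespace's lo) up,
# open TCP/4420 on the initiator side with an SPDK_NVMF-tagged rule so teardown can
# strip it again, then ping in both directions to confirm the path.
ip link set "$initiator_if" up
ip netns exec "$target_ns" ip link set "$target_if" up
ip netns exec "$target_ns" ip link set lo up
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: allow 4420'   # the script stores the full rule text in the comment
ping -c 1 "$target_ip"
ip netns exec "$target_ns" ping -c 1 "$initiator_ip"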
00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:57:40.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:57:40.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:57:40.048 00:57:40.048 --- 10.0.0.2 ping statistics --- 00:57:40.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:40.048 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:57:40.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:57:40.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:57:40.048 00:57:40.048 --- 10.0.0.1 ping statistics --- 00:57:40.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:40.048 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=804888 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 804888 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 804888 ']' 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:57:40.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:57:40.048 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:57:40.048 [2024-12-09 05:52:34.049115] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:57:40.048 [2024-12-09 05:52:34.050299] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:57:40.048 [2024-12-09 05:52:34.050365] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:57:40.048 [2024-12-09 05:52:34.123995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:57:40.048 [2024-12-09 05:52:34.185617] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:57:40.048 [2024-12-09 05:52:34.185684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:57:40.048 [2024-12-09 05:52:34.185698] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:57:40.048 [2024-12-09 05:52:34.185709] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:57:40.048 [2024-12-09 05:52:34.185718] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:57:40.048 [2024-12-09 05:52:34.187305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:57:40.048 [2024-12-09 05:52:34.187373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:57:40.048 [2024-12-09 05:52:34.187439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:57:40.048 [2024-12-09 05:52:34.187435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:57:40.306 [2024-12-09 05:52:34.283506] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:57:40.306 [2024-12-09 05:52:34.283689] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:57:40.306 [2024-12-09 05:52:34.283990] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
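What was just traced is the target being launched inside that namespace and the script blocking until its JSON-RPC socket answers. A standalone approximation is sketched below; the polling loop stands in for waitforlisten and is not that helper's literal implementation, and the rpc.py path and socket are the defaults used in this run.

#!/usr/bin/env bash
# Approximate sketch of starting nvmf_tgt in the target namespace and waiting for RPC.
set -e

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
target_ns=cvl_0_0_ns_spdk
rpc_sock=/var/tmp/spdk.sock

# Start nvmf_tgt inside the namespace: shared-memory id 0, all tracepoint groups
# (-e 0xFFFF), interrupt mode (reactors sleep on events instead of busy-polling),
# core mask 0xF (four reactors, matching the notices above).
ip netns exec "$target_ns" "$SPDK_DIR/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!

# Poll the JSON-RPC socket before issuing any rpc_cmd calls (stand-in for waitforlisten).
until "$SPDK_DIR/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is listening on $rpc_sock"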
00:57:40.306 [2024-12-09 05:52:34.284622] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:57:40.307 [2024-12-09 05:52:34.284856] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:57:40.307 [2024-12-09 05:52:34.336151] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:57:40.307 Malloc0 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:57:40.307 
05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:57:40.307 [2024-12-09 05:52:34.404394] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:57:40.307 test case1: single bdev can't be used in multiple subsystems 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:57:40.307 [2024-12-09 05:52:34.428060] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:57:40.307 [2024-12-09 05:52:34.428089] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:57:40.307 [2024-12-09 05:52:34.428103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:57:40.307 request: 00:57:40.307 { 00:57:40.307 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:57:40.307 "namespace": { 00:57:40.307 "bdev_name": "Malloc0", 00:57:40.307 "no_auto_visible": false, 00:57:40.307 "hide_metadata": false 00:57:40.307 }, 00:57:40.307 "method": "nvmf_subsystem_add_ns", 00:57:40.307 "req_id": 1 00:57:40.307 } 00:57:40.307 Got JSON-RPC error response 00:57:40.307 response: 00:57:40.307 { 00:57:40.307 "code": -32602, 00:57:40.307 "message": "Invalid parameters" 00:57:40.307 } 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:57:40.307 05:52:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:57:40.307 Adding namespace failed - expected result. 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:57:40.307 test case2: host connect to nvmf target in multiple paths 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:57:40.307 [2024-12-09 05:52:34.436139] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:40.307 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:57:40.565 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:57:40.823 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:57:40.823 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:57:40.823 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:57:40.823 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:57:40.823 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:57:42.719 05:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:57:42.719 05:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:57:42.719 05:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:57:42.719 05:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:57:42.719 05:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:57:42.719 05:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:57:42.719 05:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:57:42.976 [global] 00:57:42.976 thread=1 00:57:42.976 invalidate=1 
00:57:42.976 rw=write 00:57:42.976 time_based=1 00:57:42.976 runtime=1 00:57:42.976 ioengine=libaio 00:57:42.976 direct=1 00:57:42.976 bs=4096 00:57:42.976 iodepth=1 00:57:42.976 norandommap=0 00:57:42.976 numjobs=1 00:57:42.976 00:57:42.976 verify_dump=1 00:57:42.976 verify_backlog=512 00:57:42.976 verify_state_save=0 00:57:42.976 do_verify=1 00:57:42.976 verify=crc32c-intel 00:57:42.976 [job0] 00:57:42.976 filename=/dev/nvme0n1 00:57:42.976 Could not set queue depth (nvme0n1) 00:57:42.976 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:57:42.976 fio-3.35 00:57:42.976 Starting 1 thread 00:57:44.345 00:57:44.345 job0: (groupid=0, jobs=1): err= 0: pid=805389: Mon Dec 9 05:52:38 2024 00:57:44.345 read: IOPS=1321, BW=5288KiB/s (5414kB/s)(5388KiB/1019msec) 00:57:44.345 slat (nsec): min=5468, max=85228, avg=14852.88, stdev=6545.46 00:57:44.345 clat (usec): min=168, max=42349, avg=511.48, stdev=3201.76 00:57:44.345 lat (usec): min=174, max=42366, avg=526.33, stdev=3201.66 00:57:44.345 clat percentiles (usec): 00:57:44.345 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 219], 20.00th=[ 243], 00:57:44.345 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 255], 00:57:44.345 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 445], 00:57:44.345 | 99.00th=[ 660], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:57:44.345 | 99.99th=[42206] 00:57:44.345 write: IOPS=1507, BW=6029KiB/s (6174kB/s)(6144KiB/1019msec); 0 zone resets 00:57:44.345 slat (nsec): min=6933, max=54634, avg=13995.85, stdev=6414.77 00:57:44.345 clat (usec): min=135, max=332, avg=178.97, stdev=24.25 00:57:44.345 lat (usec): min=143, max=354, avg=192.97, stdev=27.75 00:57:44.345 clat percentiles (usec): 00:57:44.345 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:57:44.345 | 30.00th=[ 159], 40.00th=[ 172], 50.00th=[ 186], 60.00th=[ 188], 00:57:44.345 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 204], 95.00th=[ 208], 00:57:44.345 | 99.00th=[ 251], 99.50th=[ 273], 99.90th=[ 289], 99.95th=[ 334], 00:57:44.345 | 99.99th=[ 334] 00:57:44.345 bw ( KiB/s): min= 3072, max= 9216, per=100.00%, avg=6144.00, stdev=4344.46, samples=2 00:57:44.345 iops : min= 768, max= 2304, avg=1536.00, stdev=1086.12, samples=2 00:57:44.345 lat (usec) : 250=71.83%, 500=26.74%, 750=1.14% 00:57:44.345 lat (msec) : 50=0.28% 00:57:44.345 cpu : usr=2.75%, sys=5.99%, ctx=2883, majf=0, minf=1 00:57:44.345 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:57:44.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:44.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:44.345 issued rwts: total=1347,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:44.345 latency : target=0, window=0, percentile=100.00%, depth=1 00:57:44.345 00:57:44.345 Run status group 0 (all jobs): 00:57:44.345 READ: bw=5288KiB/s (5414kB/s), 5288KiB/s-5288KiB/s (5414kB/s-5414kB/s), io=5388KiB (5517kB), run=1019-1019msec 00:57:44.345 WRITE: bw=6029KiB/s (6174kB/s), 6029KiB/s-6029KiB/s (6174kB/s-6174kB/s), io=6144KiB (6291kB), run=1019-1019msec 00:57:44.345 00:57:44.345 Disk stats (read/write): 00:57:44.345 nvme0n1: ios=1269/1536, merge=0/0, ticks=581/263, in_queue=844, util=91.38% 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:57:44.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:57:44.345 05:52:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:57:44.345 rmmod nvme_tcp 00:57:44.345 rmmod nvme_fabrics 00:57:44.345 rmmod nvme_keyring 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 804888 ']' 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 804888 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 804888 ']' 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 804888 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 804888 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 804888' 00:57:44.345 killing process with pid 804888 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 804888 00:57:44.345 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 804888 00:57:44.604 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:57:44.604 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:57:44.604 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:57:44.604 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:57:44.604 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:57:44.604 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:57:44.604 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:57:44.604 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:57:44.604 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:57:44.604 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:44.604 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:44.604 05:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:47.135 05:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:57:47.135 00:57:47.135 real 0m9.384s 00:57:47.135 user 0m17.210s 00:57:47.135 sys 0m3.654s 00:57:47.135 05:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:57:47.135 05:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:57:47.135 ************************************ 00:57:47.135 END TEST nvmf_nmic 00:57:47.135 ************************************ 00:57:47.135 05:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:57:47.135 05:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:57:47.135 05:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:57:47.135 05:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:57:47.135 ************************************ 00:57:47.135 START TEST nvmf_fio_target 00:57:47.135 ************************************ 00:57:47.135 05:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:57:47.135 * Looking for test storage... 
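For reference, the nmic test that just finished above configured the target entirely over JSON-RPC (rpc_cmd is effectively a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock) and then exercised two initiator paths with nvme connect. Collapsed into one standalone sketch, using this run's generated host NQN/ID, that sequence was equivalent to:

#!/usr/bin/env bash
# Condensed recap of the RPC/connect sequence the nmic test drove via rpc_cmd.
set -e

rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
hostid=5b23e107-7094-e311-b1cb-001e67a97d55

rpc nvmf_create_transport -t tcp -o -u 8192           # TCP transport, 8192-byte in-capsule data
rpc bdev_malloc_create 64 512 -b Malloc0              # 64 MiB RAM-backed bdev, 512-byte blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

# Test case 1 (negative): adding the same Malloc0 to a second subsystem (cnode2) fails
# with "already claimed: type exclusive_write" -- the expected result seen above.

# Test case 2: connect to the same subsystem over both listeners (two paths).
nvme connect --hostnqn="$hostnqn" --hostid="$hostid" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect --hostnqn="$hostnqn" --hostid="$hostid" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421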
00:57:47.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:57:47.135 05:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:57:47.135 05:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:57:47.135 05:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:57:47.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:47.135 --rc genhtml_branch_coverage=1 00:57:47.135 --rc genhtml_function_coverage=1 00:57:47.135 --rc genhtml_legend=1 00:57:47.135 --rc geninfo_all_blocks=1 00:57:47.135 --rc geninfo_unexecuted_blocks=1 00:57:47.135 00:57:47.135 ' 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:57:47.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:47.135 --rc genhtml_branch_coverage=1 00:57:47.135 --rc genhtml_function_coverage=1 00:57:47.135 --rc genhtml_legend=1 00:57:47.135 --rc geninfo_all_blocks=1 00:57:47.135 --rc geninfo_unexecuted_blocks=1 00:57:47.135 00:57:47.135 ' 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:57:47.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:47.135 --rc genhtml_branch_coverage=1 00:57:47.135 --rc genhtml_function_coverage=1 00:57:47.135 --rc genhtml_legend=1 00:57:47.135 --rc geninfo_all_blocks=1 00:57:47.135 --rc geninfo_unexecuted_blocks=1 00:57:47.135 00:57:47.135 ' 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:57:47.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:47.135 --rc genhtml_branch_coverage=1 00:57:47.135 --rc genhtml_function_coverage=1 00:57:47.135 --rc genhtml_legend=1 00:57:47.135 --rc geninfo_all_blocks=1 00:57:47.135 --rc geninfo_unexecuted_blocks=1 00:57:47.135 
00:57:47.135 ' 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:57:47.135 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:57:47.136 05:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:57:49.072 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:57:49.072 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:57:49.072 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:57:49.072 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:57:49.072 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:57:49.072 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:57:49.072 05:52:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:57:49.072 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:57:49.073 05:52:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:57:49.073 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:57:49.073 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:57:49.073 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:57:49.073 Found net devices under 0000:0a:00.1: cvl_0_1 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:57:49.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:57:49.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:57:49.073 00:57:49.073 --- 10.0.0.2 ping statistics --- 00:57:49.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:49.073 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:57:49.073 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:57:49.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:57:49.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:57:49.073 00:57:49.074 --- 10.0.0.1 ping statistics --- 00:57:49.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:49.074 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:57:49.074 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:57:49.074 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:57:49.074 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:57:49.074 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:57:49.074 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:57:49.074 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:57:49.074 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:57:49.074 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:57:49.074 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:57:49.364 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:57:49.364 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:57:49.364 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:57:49.364 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:57:49.364 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=807467 00:57:49.364 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:57:49.364 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 807467 00:57:49.364 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 807467 ']' 00:57:49.364 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:57:49.364 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:57:49.364 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:57:49.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:57:49.364 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:57:49.365 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:57:49.365 [2024-12-09 05:52:43.343569] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:57:49.365 [2024-12-09 05:52:43.344637] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:57:49.365 [2024-12-09 05:52:43.344689] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:57:49.365 [2024-12-09 05:52:43.419770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:57:49.365 [2024-12-09 05:52:43.482303] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:57:49.365 [2024-12-09 05:52:43.482362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:57:49.365 [2024-12-09 05:52:43.482378] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:57:49.365 [2024-12-09 05:52:43.482389] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:57:49.365 [2024-12-09 05:52:43.482400] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:57:49.365 [2024-12-09 05:52:43.483958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:57:49.365 [2024-12-09 05:52:43.484017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:57:49.365 [2024-12-09 05:52:43.484090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:57:49.365 [2024-12-09 05:52:43.484094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:57:49.365 [2024-12-09 05:52:43.573927] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:57:49.365 [2024-12-09 05:52:43.574125] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:57:49.365 [2024-12-09 05:52:43.574458] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:57:49.365 [2024-12-09 05:52:43.575091] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:57:49.365 [2024-12-09 05:52:43.575366] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
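The RPC sequence in the lines that follow (transport creation, the bdev_malloc_create/bdev_raid_create calls, subsystem, namespaces, listener, and the nvme connect from the initiator) condenses to the short shell sketch below. It is a reading aid, not part of the captured output: it assumes the nvmf_tgt started above is still listening on the default /var/tmp/spdk.sock RPC socket, abbreviates the full workspace path behind a hypothetical $SPDK variable, and reuses the 10.0.0.2:4420 / nqn.2016-06.io.spdk:cnode1 values from this run.

#!/usr/bin/env bash
# Condensed sketch of the target setup that test/nvmf/target/fio.sh performs below.
# Assumption: $SPDK points at the SPDK checkout used in this job.
set -e
RPC="$SPDK/scripts/rpc.py"

# Same transport options the harness passes for TCP.
$RPC nvmf_create_transport -t tcp -o -u 8192

# Seven 64 MB malloc bdevs with 512-byte blocks (Malloc0..Malloc6 in creation order).
for i in $(seq 0 6); do $RPC bdev_malloc_create 64 512; done

# raid0 over Malloc2/Malloc3 and a concat set over Malloc4..Malloc6, as in the log.
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$RPC bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

# One subsystem with four namespaces and a TCP listener on the target-namespace IP.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect (the harness additionally passes --hostnqn/--hostid),
# exposing the four namespaces the fio jobs use as /dev/nvme0n1../dev/nvme0n4.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420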
00:57:49.627 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:57:49.627 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:57:49.627 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:57:49.627 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:57:49.627 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:57:49.627 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:57:49.627 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:57:49.886 [2024-12-09 05:52:43.884824] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:57:49.886 05:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:57:50.144 05:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:57:50.144 05:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:57:50.403 05:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:57:50.403 05:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:57:50.661 05:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:57:50.661 05:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:57:50.919 05:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:57:50.919 05:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:57:51.177 05:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:57:51.742 05:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:57:51.742 05:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:57:51.742 05:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:57:51.742 05:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:57:52.308 05:52:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:57:52.308 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:57:52.308 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:57:52.566 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:57:52.566 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:57:52.824 05:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:57:52.824 05:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:57:53.390 05:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:57:53.390 [2024-12-09 05:52:47.560964] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:57:53.390 05:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:57:53.648 05:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:57:53.904 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:57:54.162 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:57:54.162 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:57:54.162 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:57:54.162 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:57:54.162 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:57:54.162 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:57:56.684 05:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:57:56.684 05:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:57:56.684 05:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:57:56.684 05:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:57:56.684 05:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:57:56.684 05:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:57:56.684 05:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:57:56.684 [global] 00:57:56.684 thread=1 00:57:56.684 invalidate=1 00:57:56.684 rw=write 00:57:56.684 time_based=1 00:57:56.684 runtime=1 00:57:56.684 ioengine=libaio 00:57:56.684 direct=1 00:57:56.684 bs=4096 00:57:56.684 iodepth=1 00:57:56.684 norandommap=0 00:57:56.684 numjobs=1 00:57:56.684 00:57:56.684 verify_dump=1 00:57:56.684 verify_backlog=512 00:57:56.684 verify_state_save=0 00:57:56.684 do_verify=1 00:57:56.684 verify=crc32c-intel 00:57:56.684 [job0] 00:57:56.684 filename=/dev/nvme0n1 00:57:56.684 [job1] 00:57:56.684 filename=/dev/nvme0n2 00:57:56.684 [job2] 00:57:56.684 filename=/dev/nvme0n3 00:57:56.684 [job3] 00:57:56.684 filename=/dev/nvme0n4 00:57:56.684 Could not set queue depth (nvme0n1) 00:57:56.684 Could not set queue depth (nvme0n2) 00:57:56.684 Could not set queue depth (nvme0n3) 00:57:56.684 Could not set queue depth (nvme0n4) 00:57:56.684 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:57:56.684 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:57:56.684 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:57:56.684 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:57:56.684 fio-3.35 00:57:56.684 Starting 4 threads 00:57:57.614 00:57:57.614 job0: (groupid=0, jobs=1): err= 0: pid=808527: Mon Dec 9 05:52:51 2024 00:57:57.614 read: IOPS=674, BW=2697KiB/s (2762kB/s)(2700KiB/1001msec) 00:57:57.614 slat (nsec): min=5326, max=32466, avg=7821.15, stdev=4492.34 00:57:57.614 clat (usec): min=190, max=41203, avg=1169.49, stdev=6001.79 00:57:57.614 lat (usec): min=196, max=41208, avg=1177.31, stdev=6002.70 00:57:57.614 clat percentiles (usec): 00:57:57.614 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 243], 00:57:57.614 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 251], 00:57:57.614 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 326], 95.00th=[ 453], 00:57:57.614 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:57:57.614 | 99.99th=[41157] 00:57:57.614 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:57:57.614 slat (nsec): min=6803, max=43104, avg=8490.44, stdev=2566.03 00:57:57.614 clat (usec): min=157, max=461, avg=188.04, stdev=23.08 00:57:57.614 lat (usec): min=165, max=470, avg=196.53, stdev=23.51 00:57:57.614 clat percentiles (usec): 00:57:57.614 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:57:57.614 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:57:57.614 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 212], 95.00th=[ 221], 00:57:57.614 | 99.00th=[ 258], 
99.50th=[ 277], 99.90th=[ 429], 99.95th=[ 461], 00:57:57.614 | 99.99th=[ 461] 00:57:57.614 bw ( KiB/s): min= 4096, max= 4096, per=22.44%, avg=4096.00, stdev= 0.00, samples=1 00:57:57.614 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:57:57.614 lat (usec) : 250=78.99%, 500=19.48%, 750=0.65% 00:57:57.614 lat (msec) : 50=0.88% 00:57:57.614 cpu : usr=1.00%, sys=1.90%, ctx=1700, majf=0, minf=2 00:57:57.614 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:57:57.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:57.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:57.614 issued rwts: total=675,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:57.614 latency : target=0, window=0, percentile=100.00%, depth=1 00:57:57.614 job1: (groupid=0, jobs=1): err= 0: pid=808528: Mon Dec 9 05:52:51 2024 00:57:57.614 read: IOPS=660, BW=2642KiB/s (2705kB/s)(2668KiB/1010msec) 00:57:57.614 slat (nsec): min=5450, max=34644, avg=6964.27, stdev=2895.62 00:57:57.614 clat (usec): min=180, max=41191, avg=1179.95, stdev=6036.82 00:57:57.614 lat (usec): min=186, max=41197, avg=1186.91, stdev=6037.92 00:57:57.614 clat percentiles (usec): 00:57:57.614 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 225], 20.00th=[ 233], 00:57:57.614 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:57:57.614 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 334], 95.00th=[ 537], 00:57:57.614 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:57:57.614 | 99.99th=[41157] 00:57:57.614 write: IOPS=1013, BW=4055KiB/s (4153kB/s)(4096KiB/1010msec); 0 zone resets 00:57:57.614 slat (nsec): min=7390, max=29763, avg=8695.86, stdev=1931.97 00:57:57.614 clat (usec): min=137, max=313, avg=198.04, stdev=36.83 00:57:57.614 lat (usec): min=145, max=323, avg=206.73, stdev=37.20 00:57:57.614 clat percentiles (usec): 00:57:57.614 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 161], 00:57:57.614 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 192], 60.00th=[ 223], 00:57:57.614 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 245], 95.00th=[ 251], 00:57:57.614 | 99.00th=[ 262], 99.50th=[ 265], 99.90th=[ 285], 99.95th=[ 314], 00:57:57.614 | 99.99th=[ 314] 00:57:57.614 bw ( KiB/s): min= 8192, max= 8192, per=44.89%, avg=8192.00, stdev= 0.00, samples=1 00:57:57.614 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:57:57.614 lat (usec) : 250=80.37%, 500=17.21%, 750=1.54% 00:57:57.614 lat (msec) : 50=0.89% 00:57:57.614 cpu : usr=1.09%, sys=1.68%, ctx=1693, majf=0, minf=1 00:57:57.614 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:57:57.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:57.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:57.614 issued rwts: total=667,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:57.614 latency : target=0, window=0, percentile=100.00%, depth=1 00:57:57.614 job2: (groupid=0, jobs=1): err= 0: pid=808531: Mon Dec 9 05:52:51 2024 00:57:57.614 read: IOPS=2041, BW=8168KiB/s (8364kB/s)(8176KiB/1001msec) 00:57:57.614 slat (nsec): min=5894, max=52161, avg=9239.49, stdev=5243.29 00:57:57.615 clat (usec): min=224, max=1005, avg=268.77, stdev=33.28 00:57:57.615 lat (usec): min=231, max=1023, avg=278.01, stdev=35.45 00:57:57.615 clat percentiles (usec): 00:57:57.615 | 1.00th=[ 235], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 251], 00:57:57.615 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 
269], 00:57:57.615 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 293], 95.00th=[ 306], 00:57:57.615 | 99.00th=[ 334], 99.50th=[ 363], 99.90th=[ 791], 99.95th=[ 930], 00:57:57.615 | 99.99th=[ 1004] 00:57:57.615 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:57:57.615 slat (usec): min=7, max=1226, avg=12.25, stdev=27.57 00:57:57.615 clat (usec): min=165, max=1289, avg=191.78, stdev=37.08 00:57:57.615 lat (usec): min=173, max=1435, avg=204.03, stdev=46.98 00:57:57.615 clat percentiles (usec): 00:57:57.615 | 1.00th=[ 169], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 176], 00:57:57.615 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:57:57.615 | 70.00th=[ 196], 80.00th=[ 200], 90.00th=[ 208], 95.00th=[ 225], 00:57:57.615 | 99.00th=[ 273], 99.50th=[ 318], 99.90th=[ 644], 99.95th=[ 807], 00:57:57.615 | 99.99th=[ 1287] 00:57:57.615 bw ( KiB/s): min= 8488, max= 8488, per=46.51%, avg=8488.00, stdev= 0.00, samples=1 00:57:57.615 iops : min= 2122, max= 2122, avg=2122.00, stdev= 0.00, samples=1 00:57:57.615 lat (usec) : 250=58.60%, 500=41.18%, 750=0.10%, 1000=0.07% 00:57:57.615 lat (msec) : 2=0.05% 00:57:57.615 cpu : usr=2.30%, sys=6.50%, ctx=4095, majf=0, minf=1 00:57:57.615 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:57:57.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:57.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:57.615 issued rwts: total=2044,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:57.615 latency : target=0, window=0, percentile=100.00%, depth=1 00:57:57.615 job3: (groupid=0, jobs=1): err= 0: pid=808532: Mon Dec 9 05:52:51 2024 00:57:57.615 read: IOPS=38, BW=155KiB/s (159kB/s)(156KiB/1005msec) 00:57:57.615 slat (nsec): min=6692, max=34905, avg=12861.77, stdev=5734.61 00:57:57.615 clat (usec): min=262, max=41357, avg=22459.16, stdev=20116.01 00:57:57.615 lat (usec): min=281, max=41375, avg=22472.02, stdev=20118.99 00:57:57.615 clat percentiles (usec): 00:57:57.615 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 285], 20.00th=[ 302], 00:57:57.615 | 30.00th=[ 375], 40.00th=[ 437], 50.00th=[40633], 60.00th=[41157], 00:57:57.615 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:57:57.615 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:57:57.615 | 99.99th=[41157] 00:57:57.615 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:57:57.615 slat (nsec): min=7378, max=34664, avg=9678.56, stdev=2642.76 00:57:57.615 clat (usec): min=182, max=396, avg=233.76, stdev=18.73 00:57:57.615 lat (usec): min=190, max=404, avg=243.44, stdev=19.11 00:57:57.615 clat percentiles (usec): 00:57:57.615 | 1.00th=[ 198], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 223], 00:57:57.615 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 233], 60.00th=[ 237], 00:57:57.615 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 249], 95.00th=[ 255], 00:57:57.615 | 99.00th=[ 306], 99.50th=[ 351], 99.90th=[ 396], 99.95th=[ 396], 00:57:57.615 | 99.99th=[ 396] 00:57:57.615 bw ( KiB/s): min= 4096, max= 4096, per=22.44%, avg=4096.00, stdev= 0.00, samples=1 00:57:57.615 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:57:57.615 lat (usec) : 250=83.85%, 500=12.16% 00:57:57.615 lat (msec) : 20=0.18%, 50=3.81% 00:57:57.615 cpu : usr=0.50%, sys=0.40%, ctx=554, majf=0, minf=1 00:57:57.615 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:57:57.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:57:57.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:57.615 issued rwts: total=39,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:57.615 latency : target=0, window=0, percentile=100.00%, depth=1 00:57:57.615 00:57:57.615 Run status group 0 (all jobs): 00:57:57.615 READ: bw=13.2MiB/s (13.9MB/s), 155KiB/s-8168KiB/s (159kB/s-8364kB/s), io=13.4MiB (14.0MB), run=1001-1010msec 00:57:57.615 WRITE: bw=17.8MiB/s (18.7MB/s), 2038KiB/s-8184KiB/s (2087kB/s-8380kB/s), io=18.0MiB (18.9MB), run=1001-1010msec 00:57:57.615 00:57:57.615 Disk stats (read/write): 00:57:57.615 nvme0n1: ios=637/1024, merge=0/0, ticks=659/184, in_queue=843, util=86.47% 00:57:57.615 nvme0n2: ios=686/1024, merge=0/0, ticks=1595/194, in_queue=1789, util=97.76% 00:57:57.615 nvme0n3: ios=1618/2048, merge=0/0, ticks=634/363, in_queue=997, util=97.70% 00:57:57.615 nvme0n4: ios=62/512, merge=0/0, ticks=1664/108, in_queue=1772, util=97.78% 00:57:57.615 05:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:57:57.615 [global] 00:57:57.615 thread=1 00:57:57.615 invalidate=1 00:57:57.615 rw=randwrite 00:57:57.615 time_based=1 00:57:57.615 runtime=1 00:57:57.615 ioengine=libaio 00:57:57.615 direct=1 00:57:57.615 bs=4096 00:57:57.615 iodepth=1 00:57:57.615 norandommap=0 00:57:57.615 numjobs=1 00:57:57.615 00:57:57.615 verify_dump=1 00:57:57.615 verify_backlog=512 00:57:57.615 verify_state_save=0 00:57:57.615 do_verify=1 00:57:57.615 verify=crc32c-intel 00:57:57.615 [job0] 00:57:57.615 filename=/dev/nvme0n1 00:57:57.615 [job1] 00:57:57.615 filename=/dev/nvme0n2 00:57:57.615 [job2] 00:57:57.615 filename=/dev/nvme0n3 00:57:57.615 [job3] 00:57:57.615 filename=/dev/nvme0n4 00:57:57.615 Could not set queue depth (nvme0n1) 00:57:57.615 Could not set queue depth (nvme0n2) 00:57:57.615 Could not set queue depth (nvme0n3) 00:57:57.615 Could not set queue depth (nvme0n4) 00:57:57.872 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:57:57.872 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:57:57.872 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:57:57.872 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:57:57.872 fio-3.35 00:57:57.872 Starting 4 threads 00:57:59.243 00:57:59.243 job0: (groupid=0, jobs=1): err= 0: pid=808757: Mon Dec 9 05:52:53 2024 00:57:59.243 read: IOPS=2127, BW=8511KiB/s (8716kB/s)(8520KiB/1001msec) 00:57:59.243 slat (nsec): min=4140, max=64393, avg=7140.95, stdev=5508.61 00:57:59.243 clat (usec): min=190, max=600, avg=236.73, stdev=42.27 00:57:59.243 lat (usec): min=201, max=604, avg=243.87, stdev=45.67 00:57:59.243 clat percentiles (usec): 00:57:59.243 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 219], 00:57:59.243 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:57:59.243 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 255], 95.00th=[ 297], 00:57:59.243 | 99.00th=[ 474], 99.50th=[ 486], 99.90th=[ 523], 99.95th=[ 545], 00:57:59.243 | 99.99th=[ 603] 00:57:59.243 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:57:59.243 slat (nsec): min=5665, max=33096, avg=8456.82, stdev=4142.19 00:57:59.243 clat (usec): min=140, max=1057, 
avg=175.20, stdev=34.71 00:57:59.243 lat (usec): min=153, max=1064, avg=183.66, stdev=35.41 00:57:59.243 clat percentiles (usec): 00:57:59.243 | 1.00th=[ 151], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 157], 00:57:59.243 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 167], 60.00th=[ 174], 00:57:59.243 | 70.00th=[ 182], 80.00th=[ 194], 90.00th=[ 204], 95.00th=[ 215], 00:57:59.243 | 99.00th=[ 251], 99.50th=[ 260], 99.90th=[ 758], 99.95th=[ 889], 00:57:59.243 | 99.99th=[ 1057] 00:57:59.243 bw ( KiB/s): min= 9696, max= 9696, per=39.81%, avg=9696.00, stdev= 0.00, samples=1 00:57:59.243 iops : min= 2424, max= 2424, avg=2424.00, stdev= 0.00, samples=1 00:57:59.243 lat (usec) : 250=94.07%, 500=5.69%, 750=0.17%, 1000=0.04% 00:57:59.243 lat (msec) : 2=0.02% 00:57:59.243 cpu : usr=2.80%, sys=2.90%, ctx=4692, majf=0, minf=1 00:57:59.243 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:57:59.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:59.243 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:59.243 issued rwts: total=2130,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:59.243 latency : target=0, window=0, percentile=100.00%, depth=1 00:57:59.243 job1: (groupid=0, jobs=1): err= 0: pid=808758: Mon Dec 9 05:52:53 2024 00:57:59.243 read: IOPS=40, BW=163KiB/s (166kB/s)(164KiB/1009msec) 00:57:59.243 slat (nsec): min=6238, max=30641, avg=13463.17, stdev=4462.13 00:57:59.243 clat (usec): min=275, max=41367, avg=21174.26, stdev=20576.19 00:57:59.243 lat (usec): min=282, max=41380, avg=21187.72, stdev=20574.62 00:57:59.243 clat percentiles (usec): 00:57:59.243 | 1.00th=[ 277], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 297], 00:57:59.243 | 30.00th=[ 310], 40.00th=[ 383], 50.00th=[40633], 60.00th=[41157], 00:57:59.243 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:57:59.243 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:57:59.243 | 99.99th=[41157] 00:57:59.243 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:57:59.243 slat (nsec): min=7559, max=59696, avg=17504.95, stdev=7380.97 00:57:59.243 clat (usec): min=194, max=397, avg=249.82, stdev=26.52 00:57:59.243 lat (usec): min=216, max=441, avg=267.33, stdev=24.68 00:57:59.243 clat percentiles (usec): 00:57:59.243 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 223], 20.00th=[ 229], 00:57:59.243 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 253], 00:57:59.243 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 293], 00:57:59.243 | 99.00th=[ 359], 99.50th=[ 383], 99.90th=[ 396], 99.95th=[ 396], 00:57:59.243 | 99.99th=[ 396] 00:57:59.243 bw ( KiB/s): min= 4096, max= 4096, per=16.82%, avg=4096.00, stdev= 0.00, samples=1 00:57:59.243 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:57:59.243 lat (usec) : 250=51.18%, 500=44.67%, 750=0.36% 00:57:59.243 lat (msec) : 50=3.80% 00:57:59.243 cpu : usr=0.89%, sys=0.89%, ctx=553, majf=0, minf=2 00:57:59.243 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:57:59.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:59.243 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:59.243 issued rwts: total=41,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:59.243 latency : target=0, window=0, percentile=100.00%, depth=1 00:57:59.243 job2: (groupid=0, jobs=1): err= 0: pid=808759: Mon Dec 9 05:52:53 2024 00:57:59.243 read: IOPS=2070, BW=8284KiB/s 
(8483kB/s)(8292KiB/1001msec) 00:57:59.243 slat (nsec): min=4297, max=67573, avg=7520.00, stdev=5169.19 00:57:59.243 clat (usec): min=207, max=580, avg=244.71, stdev=46.30 00:57:59.243 lat (usec): min=212, max=632, avg=252.23, stdev=49.29 00:57:59.243 clat percentiles (usec): 00:57:59.243 | 1.00th=[ 212], 5.00th=[ 217], 10.00th=[ 217], 20.00th=[ 221], 00:57:59.243 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 241], 00:57:59.243 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 260], 95.00th=[ 293], 00:57:59.243 | 99.00th=[ 482], 99.50th=[ 515], 99.90th=[ 553], 99.95th=[ 562], 00:57:59.243 | 99.99th=[ 578] 00:57:59.243 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:57:59.243 slat (nsec): min=5615, max=39170, avg=8496.06, stdev=4100.69 00:57:59.243 clat (usec): min=136, max=518, avg=173.94, stdev=25.03 00:57:59.243 lat (usec): min=143, max=525, avg=182.44, stdev=26.19 00:57:59.243 clat percentiles (usec): 00:57:59.243 | 1.00th=[ 149], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:57:59.243 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 174], 00:57:59.243 | 70.00th=[ 182], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 208], 00:57:59.243 | 99.00th=[ 253], 99.50th=[ 277], 99.90th=[ 429], 99.95th=[ 445], 00:57:59.243 | 99.99th=[ 519] 00:57:59.243 bw ( KiB/s): min= 9056, max= 9056, per=37.18%, avg=9056.00, stdev= 0.00, samples=1 00:57:59.243 iops : min= 2264, max= 2264, avg=2264.00, stdev= 0.00, samples=1 00:57:59.243 lat (usec) : 250=89.36%, 500=10.25%, 750=0.39% 00:57:59.243 cpu : usr=2.20%, sys=3.60%, ctx=4636, majf=0, minf=1 00:57:59.243 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:57:59.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:59.243 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:59.243 issued rwts: total=2073,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:59.243 latency : target=0, window=0, percentile=100.00%, depth=1 00:57:59.243 job3: (groupid=0, jobs=1): err= 0: pid=808760: Mon Dec 9 05:52:53 2024 00:57:59.243 read: IOPS=34, BW=139KiB/s (143kB/s)(140KiB/1006msec) 00:57:59.243 slat (nsec): min=6817, max=35433, avg=15334.83, stdev=7162.05 00:57:59.243 clat (usec): min=286, max=41332, avg=24732.82, stdev=20159.29 00:57:59.243 lat (usec): min=301, max=41349, avg=24748.16, stdev=20161.70 00:57:59.243 clat percentiles (usec): 00:57:59.243 | 1.00th=[ 285], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 388], 00:57:59.243 | 30.00th=[ 437], 40.00th=[ 537], 50.00th=[41157], 60.00th=[41157], 00:57:59.243 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:57:59.243 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:57:59.243 | 99.99th=[41157] 00:57:59.243 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:57:59.243 slat (nsec): min=6782, max=42671, avg=16170.93, stdev=6784.44 00:57:59.243 clat (usec): min=178, max=380, avg=251.22, stdev=29.03 00:57:59.243 lat (usec): min=197, max=388, avg=267.39, stdev=26.25 00:57:59.243 clat percentiles (usec): 00:57:59.243 | 1.00th=[ 190], 5.00th=[ 208], 10.00th=[ 221], 20.00th=[ 233], 00:57:59.243 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 253], 00:57:59.243 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 306], 00:57:59.243 | 99.00th=[ 355], 99.50th=[ 363], 99.90th=[ 383], 99.95th=[ 383], 00:57:59.243 | 99.99th=[ 383] 00:57:59.243 bw ( KiB/s): min= 4096, max= 4096, per=16.82%, avg=4096.00, stdev= 0.00, samples=1 
00:57:59.243 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:57:59.243 lat (usec) : 250=51.19%, 500=44.61%, 750=0.37% 00:57:59.243 lat (msec) : 50=3.84% 00:57:59.243 cpu : usr=0.60%, sys=0.60%, ctx=548, majf=0, minf=1 00:57:59.243 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:57:59.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:59.243 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:57:59.243 issued rwts: total=35,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:57:59.243 latency : target=0, window=0, percentile=100.00%, depth=1 00:57:59.243 00:57:59.243 Run status group 0 (all jobs): 00:57:59.243 READ: bw=16.6MiB/s (17.4MB/s), 139KiB/s-8511KiB/s (143kB/s-8716kB/s), io=16.7MiB (17.5MB), run=1001-1009msec 00:57:59.243 WRITE: bw=23.8MiB/s (24.9MB/s), 2030KiB/s-9.99MiB/s (2078kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1009msec 00:57:59.243 00:57:59.243 Disk stats (read/write): 00:57:59.243 nvme0n1: ios=1935/2048, merge=0/0, ticks=1089/344, in_queue=1433, util=90.28% 00:57:59.243 nvme0n2: ios=87/512, merge=0/0, ticks=757/120, in_queue=877, util=91.18% 00:57:59.243 nvme0n3: ios=1873/2048, merge=0/0, ticks=890/348, in_queue=1238, util=97.30% 00:57:59.243 nvme0n4: ios=89/512, merge=0/0, ticks=1038/119, in_queue=1157, util=100.00% 00:57:59.243 05:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:57:59.243 [global] 00:57:59.243 thread=1 00:57:59.243 invalidate=1 00:57:59.243 rw=write 00:57:59.243 time_based=1 00:57:59.243 runtime=1 00:57:59.243 ioengine=libaio 00:57:59.243 direct=1 00:57:59.243 bs=4096 00:57:59.243 iodepth=128 00:57:59.243 norandommap=0 00:57:59.243 numjobs=1 00:57:59.243 00:57:59.243 verify_dump=1 00:57:59.243 verify_backlog=512 00:57:59.243 verify_state_save=0 00:57:59.243 do_verify=1 00:57:59.243 verify=crc32c-intel 00:57:59.243 [job0] 00:57:59.243 filename=/dev/nvme0n1 00:57:59.243 [job1] 00:57:59.243 filename=/dev/nvme0n2 00:57:59.243 [job2] 00:57:59.244 filename=/dev/nvme0n3 00:57:59.244 [job3] 00:57:59.244 filename=/dev/nvme0n4 00:57:59.244 Could not set queue depth (nvme0n1) 00:57:59.244 Could not set queue depth (nvme0n2) 00:57:59.244 Could not set queue depth (nvme0n3) 00:57:59.244 Could not set queue depth (nvme0n4) 00:57:59.244 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:57:59.244 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:57:59.244 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:57:59.244 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:57:59.244 fio-3.35 00:57:59.244 Starting 4 threads 00:58:00.616 00:58:00.616 job0: (groupid=0, jobs=1): err= 0: pid=808989: Mon Dec 9 05:52:54 2024 00:58:00.616 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:58:00.616 slat (usec): min=2, max=18198, avg=101.46, stdev=744.03 00:58:00.616 clat (usec): min=6322, max=44817, avg=13699.23, stdev=4388.18 00:58:00.616 lat (usec): min=6332, max=45009, avg=13800.69, stdev=4444.15 00:58:00.616 clat percentiles (usec): 00:58:00.616 | 1.00th=[ 6915], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[10552], 00:58:00.616 | 30.00th=[11731], 40.00th=[12518], 50.00th=[12911], 60.00th=[13566], 
00:58:00.616 | 70.00th=[14222], 80.00th=[16057], 90.00th=[18482], 95.00th=[22676], 00:58:00.616 | 99.00th=[25035], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:58:00.616 | 99.99th=[44827] 00:58:00.616 write: IOPS=4164, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1007msec); 0 zone resets 00:58:00.616 slat (usec): min=3, max=35364, avg=130.38, stdev=963.09 00:58:00.616 clat (usec): min=1362, max=96691, avg=15903.35, stdev=14477.81 00:58:00.616 lat (usec): min=1414, max=96723, avg=16033.73, stdev=14571.51 00:58:00.616 clat percentiles (usec): 00:58:00.616 | 1.00th=[ 6259], 5.00th=[ 8356], 10.00th=[ 9503], 20.00th=[10552], 00:58:00.616 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12256], 60.00th=[12649], 00:58:00.616 | 70.00th=[13042], 80.00th=[13698], 90.00th=[21103], 95.00th=[41157], 00:58:00.616 | 99.00th=[91751], 99.50th=[94897], 99.90th=[96994], 99.95th=[96994], 00:58:00.616 | 99.99th=[96994] 00:58:00.616 bw ( KiB/s): min=15024, max=17744, per=28.10%, avg=16384.00, stdev=1923.33, samples=2 00:58:00.616 iops : min= 3756, max= 4436, avg=4096.00, stdev=480.83, samples=2 00:58:00.616 lat (msec) : 2=0.06%, 10=17.24%, 20=72.53%, 50=8.09%, 100=2.07% 00:58:00.616 cpu : usr=3.88%, sys=7.65%, ctx=299, majf=0, minf=1 00:58:00.616 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:58:00.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:58:00.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:58:00.616 issued rwts: total=4096,4194,0,0 short=0,0,0,0 dropped=0,0,0,0 00:58:00.616 latency : target=0, window=0, percentile=100.00%, depth=128 00:58:00.616 job1: (groupid=0, jobs=1): err= 0: pid=808991: Mon Dec 9 05:52:54 2024 00:58:00.616 read: IOPS=3076, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1011msec) 00:58:00.616 slat (usec): min=2, max=18999, avg=151.68, stdev=1089.91 00:58:00.616 clat (usec): min=5341, max=55861, avg=19021.35, stdev=9351.37 00:58:00.616 lat (usec): min=5346, max=55901, avg=19173.02, stdev=9455.08 00:58:00.616 clat percentiles (usec): 00:58:00.616 | 1.00th=[ 8455], 5.00th=[ 8717], 10.00th=[ 9765], 20.00th=[10290], 00:58:00.616 | 30.00th=[13435], 40.00th=[14353], 50.00th=[15139], 60.00th=[17957], 00:58:00.616 | 70.00th=[23462], 80.00th=[28181], 90.00th=[32637], 95.00th=[38536], 00:58:00.616 | 99.00th=[43779], 99.50th=[46924], 99.90th=[48497], 99.95th=[52691], 00:58:00.616 | 99.99th=[55837] 00:58:00.616 write: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec); 0 zone resets 00:58:00.616 slat (usec): min=3, max=21962, avg=135.54, stdev=970.31 00:58:00.616 clat (usec): min=3844, max=62002, avg=19212.68, stdev=8607.94 00:58:00.616 lat (usec): min=3851, max=62035, avg=19348.22, stdev=8703.65 00:58:00.616 clat percentiles (usec): 00:58:00.616 | 1.00th=[ 6587], 5.00th=[ 8225], 10.00th=[ 9372], 20.00th=[12780], 00:58:00.616 | 30.00th=[14353], 40.00th=[15008], 50.00th=[18220], 60.00th=[20317], 00:58:00.616 | 70.00th=[22414], 80.00th=[24773], 90.00th=[30278], 95.00th=[35390], 00:58:00.616 | 99.00th=[53740], 99.50th=[53740], 99.90th=[53740], 99.95th=[57934], 00:58:00.616 | 99.99th=[62129] 00:58:00.616 bw ( KiB/s): min=11776, max=16184, per=23.98%, avg=13980.00, stdev=3116.93, samples=2 00:58:00.616 iops : min= 2944, max= 4046, avg=3495.00, stdev=779.23, samples=2 00:58:00.616 lat (msec) : 4=0.18%, 10=14.34%, 20=47.21%, 50=37.57%, 100=0.70% 00:58:00.616 cpu : usr=3.76%, sys=7.33%, ctx=191, majf=0, minf=1 00:58:00.616 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:58:00.616 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:58:00.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:58:00.616 issued rwts: total=3110,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:58:00.616 latency : target=0, window=0, percentile=100.00%, depth=128 00:58:00.616 job2: (groupid=0, jobs=1): err= 0: pid=808992: Mon Dec 9 05:52:54 2024 00:58:00.616 read: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec) 00:58:00.616 slat (usec): min=2, max=16895, avg=195.92, stdev=1202.22 00:58:00.616 clat (usec): min=12486, max=69890, avg=24926.38, stdev=6805.10 00:58:00.616 lat (usec): min=13565, max=71178, avg=25122.30, stdev=6885.66 00:58:00.616 clat percentiles (usec): 00:58:00.616 | 1.00th=[13960], 5.00th=[14746], 10.00th=[17695], 20.00th=[19006], 00:58:00.616 | 30.00th=[20841], 40.00th=[23200], 50.00th=[24773], 60.00th=[25822], 00:58:00.616 | 70.00th=[27395], 80.00th=[29492], 90.00th=[32637], 95.00th=[35390], 00:58:00.616 | 99.00th=[42730], 99.50th=[44303], 99.90th=[69731], 99.95th=[69731], 00:58:00.616 | 99.99th=[69731] 00:58:00.616 write: IOPS=2336, BW=9348KiB/s (9572kB/s)(9404KiB/1006msec); 0 zone resets 00:58:00.616 slat (usec): min=3, max=15392, avg=233.97, stdev=1236.06 00:58:00.616 clat (msec): min=5, max=122, avg=32.38, stdev=22.08 00:58:00.616 lat (msec): min=6, max=122, avg=32.62, stdev=22.21 00:58:00.616 clat percentiles (msec): 00:58:00.616 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 16], 20.00th=[ 19], 00:58:00.616 | 30.00th=[ 22], 40.00th=[ 22], 50.00th=[ 24], 60.00th=[ 28], 00:58:00.616 | 70.00th=[ 36], 80.00th=[ 41], 90.00th=[ 56], 95.00th=[ 86], 00:58:00.616 | 99.00th=[ 116], 99.50th=[ 122], 99.90th=[ 124], 99.95th=[ 124], 00:58:00.616 | 99.99th=[ 124] 00:58:00.616 bw ( KiB/s): min= 8208, max= 9600, per=15.27%, avg=8904.00, stdev=984.29, samples=2 00:58:00.616 iops : min= 2052, max= 2400, avg=2226.00, stdev=246.07, samples=2 00:58:00.616 lat (msec) : 10=0.20%, 20=25.37%, 50=67.02%, 100=5.43%, 250=1.98% 00:58:00.616 cpu : usr=2.39%, sys=3.88%, ctx=203, majf=0, minf=1 00:58:00.616 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:58:00.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:58:00.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:58:00.616 issued rwts: total=2048,2351,0,0 short=0,0,0,0 dropped=0,0,0,0 00:58:00.616 latency : target=0, window=0, percentile=100.00%, depth=128 00:58:00.616 job3: (groupid=0, jobs=1): err= 0: pid=808993: Mon Dec 9 05:52:54 2024 00:58:00.616 read: IOPS=4228, BW=16.5MiB/s (17.3MB/s)(16.6MiB/1003msec) 00:58:00.616 slat (usec): min=2, max=12608, avg=115.55, stdev=736.27 00:58:00.616 clat (usec): min=1830, max=41847, avg=14640.35, stdev=5342.41 00:58:00.616 lat (usec): min=4473, max=41859, avg=14755.90, stdev=5391.09 00:58:00.616 clat percentiles (usec): 00:58:00.616 | 1.00th=[ 6521], 5.00th=[10290], 10.00th=[10814], 20.00th=[11469], 00:58:00.616 | 30.00th=[12125], 40.00th=[12780], 50.00th=[13042], 60.00th=[13435], 00:58:00.616 | 70.00th=[14091], 80.00th=[15401], 90.00th=[23462], 95.00th=[27132], 00:58:00.616 | 99.00th=[33424], 99.50th=[35914], 99.90th=[36963], 99.95th=[36963], 00:58:00.616 | 99.99th=[41681] 00:58:00.616 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:58:00.616 slat (usec): min=3, max=18145, avg=103.28, stdev=738.04 00:58:00.616 clat (usec): min=758, max=56105, avg=14123.50, stdev=6730.89 00:58:00.616 lat (usec): min=770, max=56127, avg=14226.78, stdev=6794.49 00:58:00.616 clat 
percentiles (usec): 00:58:00.616 | 1.00th=[ 7373], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10683], 00:58:00.616 | 30.00th=[11600], 40.00th=[12125], 50.00th=[12518], 60.00th=[13042], 00:58:00.616 | 70.00th=[13829], 80.00th=[14353], 90.00th=[17695], 95.00th=[35390], 00:58:00.616 | 99.00th=[42730], 99.50th=[45351], 99.90th=[46924], 99.95th=[49021], 00:58:00.616 | 99.99th=[56361] 00:58:00.616 bw ( KiB/s): min=16384, max=20480, per=31.61%, avg=18432.00, stdev=2896.31, samples=2 00:58:00.616 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:58:00.616 lat (usec) : 1000=0.02% 00:58:00.616 lat (msec) : 2=0.01%, 10=6.90%, 20=83.00%, 50=10.04%, 100=0.02% 00:58:00.616 cpu : usr=3.29%, sys=7.78%, ctx=352, majf=0, minf=2 00:58:00.616 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:58:00.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:58:00.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:58:00.616 issued rwts: total=4241,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:58:00.616 latency : target=0, window=0, percentile=100.00%, depth=128 00:58:00.616 00:58:00.616 Run status group 0 (all jobs): 00:58:00.616 READ: bw=52.1MiB/s (54.7MB/s), 8143KiB/s-16.5MiB/s (8339kB/s-17.3MB/s), io=52.7MiB (55.3MB), run=1003-1011msec 00:58:00.616 WRITE: bw=56.9MiB/s (59.7MB/s), 9348KiB/s-17.9MiB/s (9572kB/s-18.8MB/s), io=57.6MiB (60.4MB), run=1003-1011msec 00:58:00.616 00:58:00.616 Disk stats (read/write): 00:58:00.616 nvme0n1: ios=3636/3847, merge=0/0, ticks=28792/27418, in_queue=56210, util=93.39% 00:58:00.616 nvme0n2: ios=2604/2765, merge=0/0, ticks=30127/25210, in_queue=55337, util=98.48% 00:58:00.616 nvme0n3: ios=1573/2048, merge=0/0, ticks=20948/34719, in_queue=55667, util=98.34% 00:58:00.616 nvme0n4: ios=3811/4096, merge=0/0, ticks=24466/27515, in_queue=51981, util=98.32% 00:58:00.616 05:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:58:00.616 [global] 00:58:00.616 thread=1 00:58:00.616 invalidate=1 00:58:00.616 rw=randwrite 00:58:00.616 time_based=1 00:58:00.616 runtime=1 00:58:00.616 ioengine=libaio 00:58:00.616 direct=1 00:58:00.616 bs=4096 00:58:00.617 iodepth=128 00:58:00.617 norandommap=0 00:58:00.617 numjobs=1 00:58:00.617 00:58:00.617 verify_dump=1 00:58:00.617 verify_backlog=512 00:58:00.617 verify_state_save=0 00:58:00.617 do_verify=1 00:58:00.617 verify=crc32c-intel 00:58:00.617 [job0] 00:58:00.617 filename=/dev/nvme0n1 00:58:00.617 [job1] 00:58:00.617 filename=/dev/nvme0n2 00:58:00.617 [job2] 00:58:00.617 filename=/dev/nvme0n3 00:58:00.617 [job3] 00:58:00.617 filename=/dev/nvme0n4 00:58:00.617 Could not set queue depth (nvme0n1) 00:58:00.617 Could not set queue depth (nvme0n2) 00:58:00.617 Could not set queue depth (nvme0n3) 00:58:00.617 Could not set queue depth (nvme0n4) 00:58:00.875 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:58:00.875 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:58:00.875 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:58:00.875 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:58:00.875 fio-3.35 00:58:00.875 Starting 4 threads 00:58:02.247 00:58:02.247 job0: 
(groupid=0, jobs=1): err= 0: pid=809334: Mon Dec 9 05:52:56 2024 00:58:02.247 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:58:02.247 slat (usec): min=2, max=11322, avg=102.86, stdev=766.04 00:58:02.247 clat (usec): min=5680, max=29465, avg=13177.52, stdev=3705.18 00:58:02.247 lat (usec): min=5702, max=30293, avg=13280.38, stdev=3776.57 00:58:02.247 clat percentiles (usec): 00:58:02.247 | 1.00th=[ 7701], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10421], 00:58:02.247 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11994], 60.00th=[12911], 00:58:02.247 | 70.00th=[13960], 80.00th=[16319], 90.00th=[18482], 95.00th=[20055], 00:58:02.247 | 99.00th=[25560], 99.50th=[26870], 99.90th=[29492], 99.95th=[29492], 00:58:02.247 | 99.99th=[29492] 00:58:02.247 write: IOPS=4751, BW=18.6MiB/s (19.5MB/s)(18.7MiB/1008msec); 0 zone resets 00:58:02.247 slat (usec): min=4, max=10552, avg=101.23, stdev=643.13 00:58:02.247 clat (usec): min=1154, max=43529, avg=14016.93, stdev=6669.80 00:58:02.247 lat (usec): min=1165, max=43551, avg=14118.15, stdev=6719.96 00:58:02.247 clat percentiles (usec): 00:58:02.247 | 1.00th=[ 5276], 5.00th=[ 7111], 10.00th=[ 8356], 20.00th=[10552], 00:58:02.247 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11863], 60.00th=[12387], 00:58:02.247 | 70.00th=[12911], 80.00th=[19006], 90.00th=[21103], 95.00th=[28443], 00:58:02.247 | 99.00th=[39584], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:58:02.247 | 99.99th=[43779] 00:58:02.247 bw ( KiB/s): min=16944, max=20352, per=32.22%, avg=18648.00, stdev=2409.82, samples=2 00:58:02.247 iops : min= 4236, max= 5088, avg=4662.00, stdev=602.45, samples=2 00:58:02.247 lat (msec) : 2=0.02%, 4=0.24%, 10=15.49%, 20=74.07%, 50=10.17% 00:58:02.247 cpu : usr=5.56%, sys=8.14%, ctx=379, majf=0, minf=2 00:58:02.247 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:58:02.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:58:02.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:58:02.247 issued rwts: total=4608,4790,0,0 short=0,0,0,0 dropped=0,0,0,0 00:58:02.247 latency : target=0, window=0, percentile=100.00%, depth=128 00:58:02.247 job1: (groupid=0, jobs=1): err= 0: pid=809337: Mon Dec 9 05:52:56 2024 00:58:02.247 read: IOPS=4422, BW=17.3MiB/s (18.1MB/s)(17.4MiB/1009msec) 00:58:02.247 slat (usec): min=2, max=14380, avg=114.61, stdev=857.34 00:58:02.247 clat (usec): min=2678, max=53384, avg=14339.49, stdev=7120.21 00:58:02.247 lat (usec): min=3777, max=53388, avg=14454.10, stdev=7193.69 00:58:02.247 clat percentiles (usec): 00:58:02.247 | 1.00th=[ 6783], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[10028], 00:58:02.247 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11863], 60.00th=[12911], 00:58:02.247 | 70.00th=[14091], 80.00th=[16909], 90.00th=[22938], 95.00th=[30278], 00:58:02.247 | 99.00th=[42206], 99.50th=[43779], 99.90th=[53216], 99.95th=[53216], 00:58:02.247 | 99.99th=[53216] 00:58:02.247 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:58:02.247 slat (usec): min=3, max=10497, avg=95.03, stdev=559.43 00:58:02.247 clat (usec): min=2742, max=56625, avg=13895.28, stdev=7442.64 00:58:02.247 lat (usec): min=2748, max=56631, avg=13990.32, stdev=7482.16 00:58:02.247 clat percentiles (usec): 00:58:02.247 | 1.00th=[ 4883], 5.00th=[ 6849], 10.00th=[ 8455], 20.00th=[10290], 00:58:02.247 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:58:02.247 | 70.00th=[12649], 80.00th=[17695], 90.00th=[21103], 95.00th=[25822], 
00:58:02.247 | 99.00th=[51643], 99.50th=[53216], 99.90th=[56361], 99.95th=[56361], 00:58:02.247 | 99.99th=[56886] 00:58:02.247 bw ( KiB/s): min=16384, max=20480, per=31.85%, avg=18432.00, stdev=2896.31, samples=2 00:58:02.247 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:58:02.247 lat (msec) : 4=0.33%, 10=18.56%, 20=66.73%, 50=13.61%, 100=0.78% 00:58:02.247 cpu : usr=4.56%, sys=7.64%, ctx=420, majf=0, minf=1 00:58:02.247 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:58:02.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:58:02.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:58:02.247 issued rwts: total=4462,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:58:02.247 latency : target=0, window=0, percentile=100.00%, depth=128 00:58:02.247 job2: (groupid=0, jobs=1): err= 0: pid=809338: Mon Dec 9 05:52:56 2024 00:58:02.247 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:58:02.247 slat (usec): min=2, max=13136, avg=154.88, stdev=1056.20 00:58:02.247 clat (usec): min=4759, max=38410, avg=20605.72, stdev=5506.86 00:58:02.247 lat (usec): min=4767, max=38430, avg=20760.60, stdev=5573.16 00:58:02.247 clat percentiles (usec): 00:58:02.247 | 1.00th=[ 4883], 5.00th=[11863], 10.00th=[13960], 20.00th=[15401], 00:58:02.247 | 30.00th=[18482], 40.00th=[19530], 50.00th=[20317], 60.00th=[22152], 00:58:02.247 | 70.00th=[23725], 80.00th=[25035], 90.00th=[27657], 95.00th=[28967], 00:58:02.247 | 99.00th=[33817], 99.50th=[34866], 99.90th=[36963], 99.95th=[37487], 00:58:02.247 | 99.99th=[38536] 00:58:02.247 write: IOPS=3175, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1005msec); 0 zone resets 00:58:02.247 slat (usec): min=3, max=17823, avg=146.23, stdev=1059.28 00:58:02.247 clat (usec): min=4414, max=48378, avg=19930.49, stdev=7414.57 00:58:02.247 lat (usec): min=7202, max=48383, avg=20076.72, stdev=7487.74 00:58:02.247 clat percentiles (usec): 00:58:02.247 | 1.00th=[ 8979], 5.00th=[11469], 10.00th=[13173], 20.00th=[14222], 00:58:02.247 | 30.00th=[14615], 40.00th=[15008], 50.00th=[17433], 60.00th=[19792], 00:58:02.247 | 70.00th=[24511], 80.00th=[26870], 90.00th=[29492], 95.00th=[32637], 00:58:02.247 | 99.00th=[42730], 99.50th=[43779], 99.90th=[44827], 99.95th=[45876], 00:58:02.247 | 99.99th=[48497] 00:58:02.247 bw ( KiB/s): min= 8584, max=16048, per=21.28%, avg=12316.00, stdev=5277.85, samples=2 00:58:02.247 iops : min= 2146, max= 4012, avg=3079.00, stdev=1319.46, samples=2 00:58:02.247 lat (msec) : 10=3.23%, 20=50.53%, 50=46.24% 00:58:02.247 cpu : usr=2.19%, sys=4.68%, ctx=184, majf=0, minf=1 00:58:02.247 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:58:02.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:58:02.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:58:02.247 issued rwts: total=3072,3191,0,0 short=0,0,0,0 dropped=0,0,0,0 00:58:02.247 latency : target=0, window=0, percentile=100.00%, depth=128 00:58:02.247 job3: (groupid=0, jobs=1): err= 0: pid=809339: Mon Dec 9 05:52:56 2024 00:58:02.247 read: IOPS=2387, BW=9551KiB/s (9780kB/s)(9.77MiB/1047msec) 00:58:02.247 slat (usec): min=2, max=15889, avg=211.34, stdev=1246.63 00:58:02.247 clat (usec): min=9759, max=74564, avg=27282.08, stdev=12523.39 00:58:02.248 lat (usec): min=9767, max=77872, avg=27493.41, stdev=12622.15 00:58:02.248 clat percentiles (usec): 00:58:02.248 | 1.00th=[12125], 5.00th=[12911], 10.00th=[13042], 20.00th=[15401], 00:58:02.248 | 
30.00th=[16188], 40.00th=[23462], 50.00th=[27657], 60.00th=[32113], 00:58:02.248 | 70.00th=[33424], 80.00th=[35914], 90.00th=[41157], 95.00th=[49546], 00:58:02.248 | 99.00th=[68682], 99.50th=[68682], 99.90th=[74974], 99.95th=[74974], 00:58:02.248 | 99.99th=[74974] 00:58:02.248 write: IOPS=2445, BW=9780KiB/s (10.0MB/s)(10.0MiB/1047msec); 0 zone resets 00:58:02.248 slat (usec): min=3, max=15771, avg=177.59, stdev=1154.81 00:58:02.248 clat (usec): min=9308, max=65434, avg=24967.56, stdev=10678.36 00:58:02.248 lat (usec): min=9316, max=65467, avg=25145.15, stdev=10766.35 00:58:02.248 clat percentiles (usec): 00:58:02.248 | 1.00th=[10683], 5.00th=[11731], 10.00th=[12911], 20.00th=[15533], 00:58:02.248 | 30.00th=[16057], 40.00th=[22152], 50.00th=[24511], 60.00th=[26608], 00:58:02.248 | 70.00th=[27132], 80.00th=[32113], 90.00th=[37487], 95.00th=[47973], 00:58:02.248 | 99.00th=[58459], 99.50th=[58459], 99.90th=[60556], 99.95th=[62129], 00:58:02.248 | 99.99th=[65274] 00:58:02.248 bw ( KiB/s): min= 8192, max=12288, per=17.69%, avg=10240.00, stdev=2896.31, samples=2 00:58:02.248 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:58:02.248 lat (msec) : 10=0.32%, 20=37.08%, 50=58.89%, 100=3.72% 00:58:02.248 cpu : usr=2.68%, sys=4.11%, ctx=171, majf=0, minf=1 00:58:02.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:58:02.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:58:02.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:58:02.248 issued rwts: total=2500,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:58:02.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:58:02.248 00:58:02.248 Run status group 0 (all jobs): 00:58:02.248 READ: bw=54.6MiB/s (57.3MB/s), 9551KiB/s-17.9MiB/s (9780kB/s-18.7MB/s), io=57.2MiB (60.0MB), run=1005-1047msec 00:58:02.248 WRITE: bw=56.5MiB/s (59.3MB/s), 9780KiB/s-18.6MiB/s (10.0MB/s-19.5MB/s), io=59.2MiB (62.1MB), run=1005-1047msec 00:58:02.248 00:58:02.248 Disk stats (read/write): 00:58:02.248 nvme0n1: ios=3607/4079, merge=0/0, ticks=47104/57947, in_queue=105051, util=98.20% 00:58:02.248 nvme0n2: ios=3629/3654, merge=0/0, ticks=42492/39962, in_queue=82454, util=96.35% 00:58:02.248 nvme0n3: ios=2610/2960, merge=0/0, ticks=30862/28597, in_queue=59459, util=100.00% 00:58:02.248 nvme0n4: ios=2070/2332, merge=0/0, ticks=18117/16483, in_queue=34600, util=99.89% 00:58:02.248 05:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:58:02.248 05:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=809471 00:58:02.248 05:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:58:02.248 05:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:58:02.248 [global] 00:58:02.248 thread=1 00:58:02.248 invalidate=1 00:58:02.248 rw=read 00:58:02.248 time_based=1 00:58:02.248 runtime=10 00:58:02.248 ioengine=libaio 00:58:02.248 direct=1 00:58:02.248 bs=4096 00:58:02.248 iodepth=1 00:58:02.248 norandommap=1 00:58:02.248 numjobs=1 00:58:02.248 00:58:02.248 [job0] 00:58:02.248 filename=/dev/nvme0n1 00:58:02.248 [job1] 00:58:02.248 filename=/dev/nvme0n2 00:58:02.248 [job2] 00:58:02.248 filename=/dev/nvme0n3 00:58:02.248 [job3] 00:58:02.248 filename=/dev/nvme0n4 00:58:02.248 Could not set queue depth (nvme0n1) 
00:58:02.248 Could not set queue depth (nvme0n2) 00:58:02.248 Could not set queue depth (nvme0n3) 00:58:02.248 Could not set queue depth (nvme0n4) 00:58:02.248 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:58:02.248 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:58:02.248 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:58:02.248 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:58:02.248 fio-3.35 00:58:02.248 Starting 4 threads 00:58:05.526 05:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:58:05.526 05:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:58:05.526 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=27222016, buflen=4096 00:58:05.526 fio: pid=809572, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:58:05.526 05:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:58:05.526 05:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:58:05.526 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=32915456, buflen=4096 00:58:05.526 fio: pid=809571, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:58:05.783 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=356352, buflen=4096 00:58:05.783 fio: pid=809567, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:58:06.039 05:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:58:06.039 05:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:58:06.297 05:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:58:06.297 05:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:58:06.297 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=380928, buflen=4096 00:58:06.297 fio: pid=809570, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:58:06.297 00:58:06.297 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=809567: Mon Dec 9 05:53:00 2024 00:58:06.297 read: IOPS=25, BW=99.5KiB/s (102kB/s)(348KiB/3497msec) 00:58:06.297 slat (usec): min=6, max=16912, avg=321.72, stdev=2074.25 00:58:06.297 clat (usec): min=239, max=41928, avg=39583.76, stdev=7455.35 00:58:06.297 lat (usec): min=269, max=58019, avg=39908.79, stdev=7802.15 00:58:06.297 clat percentiles (usec): 00:58:06.297 | 1.00th=[ 239], 5.00th=[40633], 
10.00th=[40633], 20.00th=[41157], 00:58:06.297 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:58:06.297 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:58:06.297 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:58:06.297 | 99.99th=[41681] 00:58:06.297 bw ( KiB/s): min= 96, max= 104, per=0.64%, avg=100.00, stdev= 4.38, samples=6 00:58:06.297 iops : min= 24, max= 26, avg=25.00, stdev= 1.10, samples=6 00:58:06.297 lat (usec) : 250=1.14%, 500=2.27% 00:58:06.297 lat (msec) : 50=95.45% 00:58:06.297 cpu : usr=0.06%, sys=0.00%, ctx=90, majf=0, minf=1 00:58:06.297 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:58:06.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:58:06.297 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:58:06.297 issued rwts: total=88,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:58:06.297 latency : target=0, window=0, percentile=100.00%, depth=1 00:58:06.297 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=809570: Mon Dec 9 05:53:00 2024 00:58:06.297 read: IOPS=24, BW=97.4KiB/s (99.7kB/s)(372KiB/3820msec) 00:58:06.297 slat (usec): min=8, max=9943, avg=281.54, stdev=1469.55 00:58:06.297 clat (usec): min=383, max=41493, avg=40544.25, stdev=4210.75 00:58:06.297 lat (usec): min=403, max=50997, avg=40828.45, stdev=4493.12 00:58:06.297 clat percentiles (usec): 00:58:06.297 | 1.00th=[ 383], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:58:06.297 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:58:06.297 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:58:06.297 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:58:06.297 | 99.99th=[41681] 00:58:06.297 bw ( KiB/s): min= 93, max= 104, per=0.62%, avg=97.86, stdev= 4.34, samples=7 00:58:06.297 iops : min= 23, max= 26, avg=24.43, stdev= 1.13, samples=7 00:58:06.297 lat (usec) : 500=1.06% 00:58:06.297 lat (msec) : 50=97.87% 00:58:06.297 cpu : usr=0.10%, sys=0.00%, ctx=97, majf=0, minf=2 00:58:06.297 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:58:06.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:58:06.297 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:58:06.297 issued rwts: total=94,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:58:06.297 latency : target=0, window=0, percentile=100.00%, depth=1 00:58:06.297 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=809571: Mon Dec 9 05:53:00 2024 00:58:06.297 read: IOPS=2516, BW=9.83MiB/s (10.3MB/s)(31.4MiB/3194msec) 00:58:06.297 slat (usec): min=4, max=7760, avg= 8.90, stdev=111.42 00:58:06.297 clat (usec): min=195, max=41140, avg=383.39, stdev=2304.41 00:58:06.297 lat (usec): min=200, max=41154, avg=392.29, stdev=2307.71 00:58:06.297 clat percentiles (usec): 00:58:06.297 | 1.00th=[ 208], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 229], 00:58:06.297 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 251], 00:58:06.297 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 285], 95.00th=[ 314], 00:58:06.297 | 99.00th=[ 469], 99.50th=[ 529], 99.90th=[41157], 99.95th=[41157], 00:58:06.297 | 99.99th=[41157] 00:58:06.297 bw ( KiB/s): min= 176, max=14320, per=62.69%, avg=9756.00, stdev=5341.09, samples=6 00:58:06.297 iops : min= 44, max= 3580, avg=2439.00, stdev=1335.27, samples=6 
00:58:06.297 lat (usec) : 250=59.89%, 500=39.46%, 750=0.31% 00:58:06.297 lat (msec) : 4=0.01%, 50=0.32% 00:58:06.297 cpu : usr=0.60%, sys=2.19%, ctx=8039, majf=0, minf=2 00:58:06.297 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:58:06.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:58:06.297 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:58:06.297 issued rwts: total=8037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:58:06.297 latency : target=0, window=0, percentile=100.00%, depth=1 00:58:06.297 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=809572: Mon Dec 9 05:53:00 2024 00:58:06.297 read: IOPS=2277, BW=9110KiB/s (9329kB/s)(26.0MiB/2918msec) 00:58:06.297 slat (nsec): min=4112, max=67821, avg=9192.92, stdev=6811.36 00:58:06.297 clat (usec): min=211, max=41058, avg=423.50, stdev=2433.88 00:58:06.297 lat (usec): min=219, max=41091, avg=432.70, stdev=2434.63 00:58:06.297 clat percentiles (usec): 00:58:06.297 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 233], 20.00th=[ 239], 00:58:06.297 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 269], 00:58:06.297 | 70.00th=[ 285], 80.00th=[ 318], 90.00th=[ 379], 95.00th=[ 388], 00:58:06.297 | 99.00th=[ 412], 99.50th=[ 461], 99.90th=[41157], 99.95th=[41157], 00:58:06.297 | 99.99th=[41157] 00:58:06.297 bw ( KiB/s): min= 152, max=14368, per=55.06%, avg=8569.60, stdev=5574.68, samples=5 00:58:06.297 iops : min= 38, max= 3592, avg=2142.40, stdev=1393.67, samples=5 00:58:06.297 lat (usec) : 250=43.30%, 500=56.25%, 750=0.06%, 1000=0.02% 00:58:06.297 lat (msec) : 50=0.36% 00:58:06.297 cpu : usr=0.89%, sys=2.33%, ctx=6647, majf=0, minf=1 00:58:06.297 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:58:06.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:58:06.297 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:58:06.297 issued rwts: total=6647,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:58:06.297 latency : target=0, window=0, percentile=100.00%, depth=1 00:58:06.297 00:58:06.298 Run status group 0 (all jobs): 00:58:06.298 READ: bw=15.2MiB/s (15.9MB/s), 97.4KiB/s-9.83MiB/s (99.7kB/s-10.3MB/s), io=58.1MiB (60.9MB), run=2918-3820msec 00:58:06.298 00:58:06.298 Disk stats (read/write): 00:58:06.298 nvme0n1: ios=84/0, merge=0/0, ticks=3323/0, in_queue=3323, util=95.25% 00:58:06.298 nvme0n2: ios=88/0, merge=0/0, ticks=3568/0, in_queue=3568, util=96.20% 00:58:06.298 nvme0n3: ios=7760/0, merge=0/0, ticks=2969/0, in_queue=2969, util=96.38% 00:58:06.298 nvme0n4: ios=6586/0, merge=0/0, ticks=2731/0, in_queue=2731, util=96.75% 00:58:06.555 05:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:58:06.555 05:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:58:06.811 05:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:58:06.811 05:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:58:07.068 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:58:07.068 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:58:07.324 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:58:07.324 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:58:07.580 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:58:07.580 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 809471 00:58:07.580 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:58:07.580 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:58:07.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:58:07.836 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:58:07.836 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:58:07.836 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:58:07.836 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:58:07.836 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:58:07.836 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:58:07.836 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:58:07.836 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:58:07.836 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:58:07.836 nvmf hotplug test: fio failed as expected 00:58:07.836 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:58:08.097 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:58:08.097 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:58:08.097 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:58:08.097 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:58:08.097 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:58:08.097 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 
00:58:08.097 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:58:08.097 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:58:08.097 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:58:08.097 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:58:08.097 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:58:08.097 rmmod nvme_tcp 00:58:08.097 rmmod nvme_fabrics 00:58:08.097 rmmod nvme_keyring 00:58:08.097 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:58:08.097 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:58:08.097 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:58:08.097 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 807467 ']' 00:58:08.097 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 807467 00:58:08.097 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 807467 ']' 00:58:08.097 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 807467 00:58:08.097 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:58:08.097 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:58:08.097 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 807467 00:58:08.097 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:58:08.097 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:58:08.097 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 807467' 00:58:08.097 killing process with pid 807467 00:58:08.097 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 807467 00:58:08.097 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 807467 00:58:08.355 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:58:08.355 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:58:08.355 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:58:08.355 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:58:08.355 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:58:08.355 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:58:08.355 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # 
iptables-restore 00:58:08.355 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:58:08.355 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:58:08.355 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:58:08.355 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:58:08.355 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:58:10.894 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:58:10.894 00:58:10.894 real 0m23.665s 00:58:10.894 user 1m6.384s 00:58:10.894 sys 0m10.173s 00:58:10.894 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:58:10.894 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:58:10.894 ************************************ 00:58:10.894 END TEST nvmf_fio_target 00:58:10.894 ************************************ 00:58:10.894 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:58:10.894 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:58:10.894 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:58:10.894 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:58:10.894 ************************************ 00:58:10.894 START TEST nvmf_bdevio 00:58:10.894 ************************************ 00:58:10.894 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:58:10.894 * Looking for test storage... 
00:58:10.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:58:10.894 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:58:10.894 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:58:10.894 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:58:10.894 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:58:10.894 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:58:10.894 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:58:10.894 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:58:10.894 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:58:10.894 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:58:10.894 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:58:10.894 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:58:10.894 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:58:10.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:10.895 --rc genhtml_branch_coverage=1 00:58:10.895 --rc genhtml_function_coverage=1 00:58:10.895 --rc genhtml_legend=1 00:58:10.895 --rc geninfo_all_blocks=1 00:58:10.895 --rc geninfo_unexecuted_blocks=1 00:58:10.895 00:58:10.895 ' 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:58:10.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:10.895 --rc genhtml_branch_coverage=1 00:58:10.895 --rc genhtml_function_coverage=1 00:58:10.895 --rc genhtml_legend=1 00:58:10.895 --rc geninfo_all_blocks=1 00:58:10.895 --rc geninfo_unexecuted_blocks=1 00:58:10.895 00:58:10.895 ' 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:58:10.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:10.895 --rc genhtml_branch_coverage=1 00:58:10.895 --rc genhtml_function_coverage=1 00:58:10.895 --rc genhtml_legend=1 00:58:10.895 --rc geninfo_all_blocks=1 00:58:10.895 --rc geninfo_unexecuted_blocks=1 00:58:10.895 00:58:10.895 ' 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:58:10.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:10.895 --rc genhtml_branch_coverage=1 00:58:10.895 --rc genhtml_function_coverage=1 00:58:10.895 --rc genhtml_legend=1 00:58:10.895 --rc geninfo_all_blocks=1 00:58:10.895 --rc geninfo_unexecuted_blocks=1 00:58:10.895 00:58:10.895 ' 00:58:10.895 05:53:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:58:10.895 05:53:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:58:10.895 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:58:10.896 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:58:10.896 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:58:10.896 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:58:10.896 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:58:10.896 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:58:10.896 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:58:12.795 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:58:12.795 05:53:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:58:12.795 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:58:12.795 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:58:12.796 Found net devices under 0000:0a:00.0: cvl_0_0 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:58:12.796 Found net devices under 0000:0a:00.1: cvl_0_1 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:58:12.796 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:58:12.796 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:58:12.796 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:58:12.796 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:58:12.796 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:58:12.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:58:12.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:58:12.796 00:58:12.796 --- 10.0.0.2 ping statistics --- 00:58:12.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:12.796 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:58:12.796 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:58:13.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:58:13.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:58:13.055 00:58:13.055 --- 10.0.0.1 ping statistics --- 00:58:13.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:13.055 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:58:13.055 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:58:13.055 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:58:13.055 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:58:13.055 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:58:13.055 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:58:13.055 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:58:13.055 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:58:13.055 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:58:13.055 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:58:13.055 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:58:13.055 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:58:13.055 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:58:13.055 05:53:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:58:13.055 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=812200 00:58:13.055 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:58:13.055 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 812200 00:58:13.055 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 812200 ']' 00:58:13.055 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:58:13.055 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:58:13.055 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:58:13.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:58:13.055 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:58:13.055 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:58:13.055 [2024-12-09 05:53:07.101406] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:58:13.055 [2024-12-09 05:53:07.102485] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:58:13.055 [2024-12-09 05:53:07.102539] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:58:13.055 [2024-12-09 05:53:07.173350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:58:13.055 [2024-12-09 05:53:07.232059] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:58:13.055 [2024-12-09 05:53:07.232130] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:58:13.055 [2024-12-09 05:53:07.232144] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:58:13.055 [2024-12-09 05:53:07.232155] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:58:13.055 [2024-12-09 05:53:07.232165] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:58:13.055 [2024-12-09 05:53:07.233897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:58:13.055 [2024-12-09 05:53:07.233971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:58:13.055 [2024-12-09 05:53:07.234019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:58:13.055 [2024-12-09 05:53:07.234023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:58:13.313 [2024-12-09 05:53:07.327458] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
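The lines above are the heart of nvmftestinit on a phy node: one port of the e810 pair (cvl_0_0) is moved into its own network namespace so a single host can act as both target (10.0.0.2 inside cvl_0_0_ns_spdk) and initiator (10.0.0.1 in the root namespace) over a real link, and nvmf_tgt is then launched inside that namespace in interrupt mode. Condensed into a standalone sketch of the same commands (interface names, addresses and the core mask are this run's values; the binary path is shortened from the full workspace path and the trailing & is added here for readability):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target, as verified above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
# -m 0x78 puts the reactors on cores 3-6; --interrupt-mode lets idle reactors sleep on
# events instead of busy-polling, which is the behaviour this test group exercises
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &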
00:58:13.313 [2024-12-09 05:53:07.327669] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:58:13.313 [2024-12-09 05:53:07.327983] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:58:13.313 [2024-12-09 05:53:07.328681] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:58:13.313 [2024-12-09 05:53:07.328912] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:58:13.313 [2024-12-09 05:53:07.382723] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:58:13.313 Malloc0 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:13.313 05:53:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:58:13.313 [2024-12-09 05:53:07.446924] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:58:13.313 { 00:58:13.313 "params": { 00:58:13.313 "name": "Nvme$subsystem", 00:58:13.313 "trtype": "$TEST_TRANSPORT", 00:58:13.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:58:13.313 "adrfam": "ipv4", 00:58:13.313 "trsvcid": "$NVMF_PORT", 00:58:13.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:58:13.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:58:13.313 "hdgst": ${hdgst:-false}, 00:58:13.313 "ddgst": ${ddgst:-false} 00:58:13.313 }, 00:58:13.313 "method": "bdev_nvme_attach_controller" 00:58:13.313 } 00:58:13.313 EOF 00:58:13.313 )") 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:58:13.313 05:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:58:13.313 "params": { 00:58:13.313 "name": "Nvme1", 00:58:13.313 "trtype": "tcp", 00:58:13.313 "traddr": "10.0.0.2", 00:58:13.313 "adrfam": "ipv4", 00:58:13.313 "trsvcid": "4420", 00:58:13.313 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:58:13.313 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:58:13.313 "hdgst": false, 00:58:13.313 "ddgst": false 00:58:13.313 }, 00:58:13.313 "method": "bdev_nvme_attach_controller" 00:58:13.313 }' 00:58:13.314 [2024-12-09 05:53:07.496351] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
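Everything bdevio needs was just provisioned over the target's RPC socket (/var/tmp/spdk.sock) through rpc_cmd, which is essentially a thin wrapper around scripts/rpc.py, and bdevio itself only receives a generated JSON config on fd 62 that attaches the exported namespace as bdev Nvme1n1 (the I/O target listed below). A rough standalone equivalent of the sequence above, with paths relative to the spdk tree; the outer "subsystems"/"bdev" wrapper is the standard --json layout and is assumed here, since only the inner attach fragment is printed in the trace:

# provision the running nvmf_tgt (mirrors the rpc_cmd calls above)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# run bdevio against it, feeding the config on fd 62 just like the harness does
test/bdev/bdevio/bdevio --json /dev/fd/62 62<<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF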
00:58:13.314 [2024-12-09 05:53:07.496422] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid812344 ] 00:58:13.572 [2024-12-09 05:53:07.564459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:58:13.572 [2024-12-09 05:53:07.628687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:58:13.572 [2024-12-09 05:53:07.628738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:58:13.572 [2024-12-09 05:53:07.628742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:58:13.830 I/O targets: 00:58:13.830 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:58:13.830 00:58:13.830 00:58:13.830 CUnit - A unit testing framework for C - Version 2.1-3 00:58:13.830 http://cunit.sourceforge.net/ 00:58:13.830 00:58:13.830 00:58:13.830 Suite: bdevio tests on: Nvme1n1 00:58:13.830 Test: blockdev write read block ...passed 00:58:13.830 Test: blockdev write zeroes read block ...passed 00:58:13.830 Test: blockdev write zeroes read no split ...passed 00:58:13.830 Test: blockdev write zeroes read split ...passed 00:58:13.830 Test: blockdev write zeroes read split partial ...passed 00:58:13.830 Test: blockdev reset ...[2024-12-09 05:53:08.038266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:58:13.830 [2024-12-09 05:53:08.038400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9acb0 (9): Bad file descriptor 00:58:14.088 [2024-12-09 05:53:08.173421] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:58:14.088 passed 00:58:14.088 Test: blockdev write read 8 blocks ...passed 00:58:14.088 Test: blockdev write read size > 128k ...passed 00:58:14.088 Test: blockdev write read invalid size ...passed 00:58:14.088 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:58:14.088 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:58:14.088 Test: blockdev write read max offset ...passed 00:58:14.346 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:58:14.346 Test: blockdev writev readv 8 blocks ...passed 00:58:14.346 Test: blockdev writev readv 30 x 1block ...passed 00:58:14.346 Test: blockdev writev readv block ...passed 00:58:14.346 Test: blockdev writev readv size > 128k ...passed 00:58:14.346 Test: blockdev writev readv size > 128k in two iovs ...passed 00:58:14.346 Test: blockdev comparev and writev ...[2024-12-09 05:53:08.387851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:58:14.346 [2024-12-09 05:53:08.387888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:14.346 [2024-12-09 05:53:08.387924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:58:14.346 [2024-12-09 05:53:08.387942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:58:14.346 [2024-12-09 05:53:08.388341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:58:14.346 [2024-12-09 05:53:08.388367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:58:14.346 [2024-12-09 05:53:08.388389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:58:14.346 [2024-12-09 05:53:08.388405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:58:14.346 [2024-12-09 05:53:08.388767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:58:14.346 [2024-12-09 05:53:08.388791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:58:14.346 [2024-12-09 05:53:08.388826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:58:14.346 [2024-12-09 05:53:08.388844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:58:14.346 [2024-12-09 05:53:08.389229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:58:14.346 [2024-12-09 05:53:08.389254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:58:14.346 [2024-12-09 05:53:08.389283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:58:14.346 [2024-12-09 05:53:08.389301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:58:14.346 passed 00:58:14.346 Test: blockdev nvme passthru rw ...passed 00:58:14.346 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:53:08.473544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:58:14.346 [2024-12-09 05:53:08.473574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:58:14.346 [2024-12-09 05:53:08.473731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:58:14.346 [2024-12-09 05:53:08.473755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:58:14.346 [2024-12-09 05:53:08.473902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:58:14.346 [2024-12-09 05:53:08.473925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:58:14.346 [2024-12-09 05:53:08.474074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:58:14.346 [2024-12-09 05:53:08.474097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:58:14.346 passed 00:58:14.346 Test: blockdev nvme admin passthru ...passed 00:58:14.346 Test: blockdev copy ...passed 00:58:14.346 00:58:14.346 Run Summary: Type Total Ran Passed Failed Inactive 00:58:14.346 suites 1 1 n/a 0 0 00:58:14.346 tests 23 23 23 0 0 00:58:14.346 asserts 152 152 152 0 n/a 00:58:14.346 00:58:14.346 Elapsed time = 1.444 seconds 00:58:14.604 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:58:14.604 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:14.604 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:58:14.604 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:14.604 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:58:14.604 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:58:14.604 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:58:14.604 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:58:14.604 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:58:14.604 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:58:14.604 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:58:14.604 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:58:14.604 rmmod nvme_tcp 00:58:14.604 rmmod nvme_fabrics 00:58:14.604 rmmod nvme_keyring 00:58:14.604 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
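With the suite done (23/23 tests, 152 asserts), bdevio.sh clears its trap and calls nvmftestfini, which unwinds the whole setup: the subsystem is deleted over RPC, the kernel NVMe modules pulled in for the test are unloaded (the -v trace above shows nvme-tcp taking nvme_fabrics and nvme_keyring with it), and just below the target process, the iptables rule and the namespace plumbing are removed as well. Written out by hand the teardown looks roughly like this; the PID and names are this run's values, and the namespace deletion is inferred from what _remove_spdk_ns does, since its trace is redirected to /dev/null above:

scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -r nvme-tcp                  # also drops its now-unused deps nvme_fabrics, nvme_keyring
modprobe -r nvme-fabrics              # usually a no-op after the line above
kill 812200                           # the nvmf_tgt started for this test; the harness then waits on it
iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove only the rule tagged during setup
ip netns delete cvl_0_0_ns_spdk       # inferred: _remove_spdk_ns runs with its output suppressed
ip -4 addr flush cvl_0_1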
00:58:14.604 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:58:14.604 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:58:14.604 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 812200 ']' 00:58:14.604 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 812200 00:58:14.604 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 812200 ']' 00:58:14.604 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 812200 00:58:14.605 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:58:14.605 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:58:14.605 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 812200 00:58:14.862 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:58:14.863 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:58:14.863 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 812200' 00:58:14.863 killing process with pid 812200 00:58:14.863 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 812200 00:58:14.863 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 812200 00:58:15.121 05:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:58:15.121 05:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:58:15.121 05:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:58:15.121 05:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:58:15.121 05:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:58:15.121 05:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:58:15.121 05:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:58:15.121 05:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:58:15.121 05:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:58:15.121 05:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:58:15.121 05:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:58:15.121 05:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:58:17.026 05:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:58:17.026 00:58:17.026 real 0m6.551s 00:58:17.026 user 0m9.099s 
00:58:17.026 sys 0m2.601s 00:58:17.026 05:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:58:17.026 05:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:58:17.026 ************************************ 00:58:17.026 END TEST nvmf_bdevio 00:58:17.026 ************************************ 00:58:17.026 05:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:58:17.026 00:58:17.026 real 3m56.600s 00:58:17.026 user 8m56.569s 00:58:17.026 sys 1m24.749s 00:58:17.026 05:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:58:17.026 05:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:58:17.026 ************************************ 00:58:17.026 END TEST nvmf_target_core_interrupt_mode 00:58:17.026 ************************************ 00:58:17.026 05:53:11 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:58:17.026 05:53:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:58:17.026 05:53:11 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:58:17.026 05:53:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:58:17.026 ************************************ 00:58:17.026 START TEST nvmf_interrupt 00:58:17.026 ************************************ 00:58:17.026 05:53:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:58:17.285 * Looking for test storage... 
00:58:17.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:58:17.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:17.285 --rc genhtml_branch_coverage=1 00:58:17.285 --rc genhtml_function_coverage=1 00:58:17.285 --rc genhtml_legend=1 00:58:17.285 --rc geninfo_all_blocks=1 00:58:17.285 --rc geninfo_unexecuted_blocks=1 00:58:17.285 00:58:17.285 ' 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:58:17.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:17.285 --rc genhtml_branch_coverage=1 00:58:17.285 --rc genhtml_function_coverage=1 00:58:17.285 --rc genhtml_legend=1 00:58:17.285 --rc geninfo_all_blocks=1 00:58:17.285 --rc geninfo_unexecuted_blocks=1 00:58:17.285 00:58:17.285 ' 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:58:17.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:17.285 --rc genhtml_branch_coverage=1 00:58:17.285 --rc genhtml_function_coverage=1 00:58:17.285 --rc genhtml_legend=1 00:58:17.285 --rc geninfo_all_blocks=1 00:58:17.285 --rc geninfo_unexecuted_blocks=1 00:58:17.285 00:58:17.285 ' 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:58:17.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:17.285 --rc genhtml_branch_coverage=1 00:58:17.285 --rc genhtml_function_coverage=1 00:58:17.285 --rc genhtml_legend=1 00:58:17.285 --rc geninfo_all_blocks=1 00:58:17.285 --rc geninfo_unexecuted_blocks=1 00:58:17.285 00:58:17.285 ' 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:58:17.285 05:53:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:58:17.286 05:53:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:58:17.286 05:53:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:58:17.286 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:58:17.286 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:58:17.286 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:58:17.286 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:58:17.286 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:58:17.286 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:58:17.286 05:53:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:58:17.286 05:53:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:58:17.286 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:58:17.286 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:58:17.286 05:53:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:58:17.286 05:53:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:58:19.819 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:58:19.819 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:58:19.819 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:58:19.819 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:58:19.819 05:53:13 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:58:19.819 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:58:19.819 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:58:19.819 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:58:19.819 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:58:19.819 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:58:19.819 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:58:19.819 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:58:19.819 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:58:19.819 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:58:19.819 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:58:19.819 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:58:19.819 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:58:19.819 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:58:19.819 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:58:19.819 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:58:19.819 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:58:19.819 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:58:19.819 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:58:19.820 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:58:19.820 05:53:13 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:58:19.820 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:58:19.820 Found net devices under 0000:0a:00.0: cvl_0_0 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:58:19.820 Found net devices under 0000:0a:00.1: cvl_0_1 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:58:19.820 05:53:13 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:58:19.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:58:19.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:58:19.820 00:58:19.820 --- 10.0.0.2 ping statistics --- 00:58:19.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:19.820 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:58:19.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:58:19.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:58:19.820 00:58:19.820 --- 10.0.0.1 ping statistics --- 00:58:19.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:19.820 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=814434 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 814434 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 814434 ']' 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:58:19.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:58:19.820 05:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:58:19.820 [2024-12-09 05:53:13.627090] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:58:19.820 [2024-12-09 05:53:13.628171] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:58:19.820 [2024-12-09 05:53:13.628236] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:58:19.820 [2024-12-09 05:53:13.698732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:58:19.820 [2024-12-09 05:53:13.754489] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
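
[Editor's annotation] The nvmftestinit/nvmf_tcp_init trace above splits the two E810 ports between the host and a private network namespace, opens TCP port 4420 through iptables, verifies reachability with the two pings, loads nvme-tcp, and then nvmfappstart launches the target inside that namespace with interrupt mode enabled (its startup notices continue below). What follows is a condensed, editor-written sketch of those steps, not a substitute for nvmf/common.sh; the interface names (cvl_0_0/cvl_0_1), addresses, core mask, and binary path are the ones this particular run used.

  # The target-side port lives in its own netns so the initiator on the host can
  # reach it over the physical NIC ports (the real script also flushes addresses first).
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                      # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side (host)
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # reachability checks
  modprobe nvme-tcp

  # nvmfappstart: two cores, interrupt mode, run inside the namespace; the test
  # then waits for the RPC socket with waitforlisten.
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
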
00:58:19.820 [2024-12-09 05:53:13.754547] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:58:19.820 [2024-12-09 05:53:13.754577] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:58:19.820 [2024-12-09 05:53:13.754589] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:58:19.820 [2024-12-09 05:53:13.754599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:58:19.820 [2024-12-09 05:53:13.756056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:58:19.820 [2024-12-09 05:53:13.756062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:58:19.821 [2024-12-09 05:53:13.840941] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:58:19.821 [2024-12-09 05:53:13.840969] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:58:19.821 [2024-12-09 05:53:13.841227] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:58:19.821 5000+0 records in 00:58:19.821 5000+0 records out 00:58:19.821 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0145834 s, 702 MB/s 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:58:19.821 AIO0 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:58:19.821 [2024-12-09 05:53:13.952683] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:19.821 05:53:13 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:58:19.821 [2024-12-09 05:53:13.976965] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 814434 0 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 814434 0 idle 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=814434 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 814434 -w 256 00:58:19.821 05:53:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:58:20.080 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 814434 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.26 reactor_0' 00:58:20.080 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 814434 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.26 reactor_0 00:58:20.080 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:58:20.080 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:58:20.080 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:58:20.080 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:58:20.080 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:58:20.080 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:58:20.080 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:58:20.080 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:58:20.080 05:53:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:58:20.080 05:53:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 814434 1 00:58:20.080 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 814434 1 idle 00:58:20.080 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=814434 00:58:20.080 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:58:20.080 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:58:20.080 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:58:20.080 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:58:20.080 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:58:20.080 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:58:20.080 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:58:20.080 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:58:20.080 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:58:20.080 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 814434 -w 256 00:58:20.080 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:58:20.337 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 814438 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1' 00:58:20.337 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 814438 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1 00:58:20.337 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:58:20.337 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:58:20.337 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:58:20.337 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:58:20.337 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:58:20.337 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:58:20.337 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:58:20.337 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:58:20.337 05:53:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:58:20.337 05:53:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=814587 00:58:20.337 05:53:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:58:20.337 05:53:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
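
[Editor's annotation] The reactor_is_idle checks above, and the reactor_is_busy checks that follow once spdk_nvme_perf starts generating load, work the same way: interrupt/common.sh takes one batch snapshot of per-thread CPU usage for the nvmf_tgt pid with top, picks the reactor_N thread line, and compares its %CPU column against a threshold (busy_threshold=65 / idle_threshold=30 for the idle checks, BUSY_THRESHOLD=30 for the busy checks). Below is a minimal editor-written approximation of that check, with the pid and threshold taken from this run; the real helper also retries up to 10 snapshots and strips leading whitespace with sed.

  reactor_cpu() {                      # usage: reactor_cpu <pid> <reactor-idx>
      top -bHn 1 -p "$1" -w 256 |      # one batch snapshot, threads (-H) shown
          grep "reactor_$2" |          # e.g. "814434 root ... R 99.9 0.1 0:02.45 reactor_0"
          awk '{print $9}'             # %CPU column
  }

  pid=814434
  rate=$(reactor_cpu "$pid" 0)
  rate=${rate%.*}; rate=${rate:-0}     # truncate to an integer %CPU, as the trace does
  if (( rate >= ${BUSY_THRESHOLD:-30} )); then
      echo "reactor_0 busy (${rate}% CPU)"
  else
      echo "reactor_0 idle (${rate}% CPU)"
  fi
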
00:58:20.337 05:53:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:58:20.337 05:53:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 814434 0 00:58:20.338 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 814434 0 busy 00:58:20.338 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=814434 00:58:20.338 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:58:20.338 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:58:20.338 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:58:20.338 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:58:20.338 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:58:20.338 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:58:20.338 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:58:20.338 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:58:20.338 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 814434 -w 256 00:58:20.338 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:58:20.338 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 814434 root 20 0 128.2g 48384 35328 S 0.0 0.1 0:00.26 reactor_0' 00:58:20.338 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 814434 root 20 0 128.2g 48384 35328 S 0.0 0.1 0:00.26 reactor_0 00:58:20.338 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:58:20.338 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:58:20.338 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:58:20.338 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:58:20.338 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:58:20.338 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:58:20.338 05:53:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 814434 -w 256 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 814434 root 20 0 128.2g 48768 35328 R 99.9 0.1 0:02.45 reactor_0' 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 814434 root 20 0 128.2g 48768 35328 R 99.9 0.1 0:02.45 reactor_0 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < 
busy_threshold )) 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 814434 1 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 814434 1 busy 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=814434 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 814434 -w 256 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 814438 root 20 0 128.2g 48768 35328 R 93.3 0.1 0:01.25 reactor_1' 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 814438 root 20 0 128.2g 48768 35328 R 93.3 0.1 0:01.25 reactor_1 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:58:21.710 05:53:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 814587 00:58:31.699 Initializing NVMe Controllers 00:58:31.699 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:58:31.699 Controller IO queue size 256, less than required. 00:58:31.699 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:58:31.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:58:31.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:58:31.700 Initialization complete. Launching workers. 
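
[Editor's annotation] The busy readings above (99.9% on reactor_0, 93.3% on reactor_1) are driven by the spdk_nvme_perf process launched earlier in the trace; its results are summarized in the latency table below. For reference, this is the perf invocation the run used, with the flag meanings spelled out as comments (values copied from the trace).

  # -q 256          queue depth per connection
  # -o 4096         4 KiB I/O size
  # -w randrw -M 30 random mixed workload, 30% reads / 70% writes
  # -t 10           run for 10 seconds
  # -c 0xC          initiator cores 2 and 3 (the "lcore 2"/"lcore 3" lines above)
  # -r ...          transport ID of the NVMe-oF/TCP listener created earlier
  ./build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
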
00:58:31.700 ======================================================== 00:58:31.700 Latency(us) 00:58:31.700 Device Information : IOPS MiB/s Average min max 00:58:31.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13001.00 50.79 19705.15 4050.69 24169.44 00:58:31.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13375.90 52.25 19151.82 4353.13 23427.14 00:58:31.700 ======================================================== 00:58:31.700 Total : 26376.90 103.03 19424.55 4050.69 24169.44 00:58:31.700 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 814434 0 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 814434 0 idle 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=814434 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 814434 -w 256 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 814434 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:20.20 reactor_0' 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 814434 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:20.20 reactor_0 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 814434 1 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 814434 1 idle 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=814434 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=1 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 814434 -w 256 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 814438 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:09.97 reactor_1' 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 814438 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:09.97 reactor_1 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:58:31.700 05:53:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:58:31.700 05:53:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:58:31.700 05:53:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:58:31.700 05:53:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:58:31.700 05:53:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:58:31.700 05:53:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:58:33.077 05:53:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:58:33.077 05:53:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:58:33.077 05:53:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:58:33.077 05:53:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:58:33.077 05:53:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:58:33.077 05:53:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:58:33.077 05:53:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # 
for i in {0..1} 00:58:33.077 05:53:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 814434 0 00:58:33.077 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 814434 0 idle 00:58:33.077 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=814434 00:58:33.077 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:58:33.077 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:58:33.077 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:58:33.077 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:58:33.077 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:58:33.077 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:58:33.077 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:58:33.077 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:58:33.077 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:58:33.077 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:58:33.077 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 814434 -w 256 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 814434 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:20.30 reactor_0' 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 814434 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:20.30 reactor_0 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 814434 1 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 814434 1 idle 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=814434 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:58:33.336 05:53:27 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 814434 -w 256 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 814438 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:10.01 reactor_1' 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 814438 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:10.01 reactor_1 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:58:33.336 05:53:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:58:33.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:58:33.594 rmmod nvme_tcp 00:58:33.594 rmmod nvme_fabrics 00:58:33.594 rmmod nvme_keyring 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 814434 ']' 00:58:33.594 
05:53:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 814434 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 814434 ']' 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 814434 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 814434 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 814434' 00:58:33.594 killing process with pid 814434 00:58:33.594 05:53:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 814434 00:58:33.595 05:53:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 814434 00:58:34.159 05:53:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:58:34.159 05:53:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:58:34.159 05:53:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:58:34.159 05:53:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:58:34.159 05:53:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:58:34.159 05:53:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:58:34.159 05:53:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:58:34.159 05:53:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:58:34.159 05:53:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:58:34.159 05:53:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:58:34.159 05:53:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:58:34.159 05:53:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:58:36.060 05:53:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:58:36.060 00:58:36.060 real 0m18.901s 00:58:36.060 user 0m37.698s 00:58:36.060 sys 0m6.305s 00:58:36.060 05:53:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:58:36.060 05:53:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:58:36.060 ************************************ 00:58:36.060 END TEST nvmf_interrupt 00:58:36.060 ************************************ 00:58:36.060 00:58:36.060 real 25m0.173s 00:58:36.060 user 58m27.864s 00:58:36.060 sys 6m42.844s 00:58:36.060 05:53:30 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:58:36.060 05:53:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:58:36.060 ************************************ 00:58:36.060 END TEST nvmf_tcp 00:58:36.060 ************************************ 00:58:36.060 05:53:30 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:58:36.060 05:53:30 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:58:36.060 05:53:30 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 
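
[Editor's annotation] That closes out nvmf_interrupt (and the nvmf_tcp suite); the spdkcli_nvmf_tcp run continues below. As a recap of what the interrupt test drove: rpc_cmd in the trace forwards its arguments to SPDK's scripts/rpc.py against /var/tmp/spdk.sock, so the provisioning and initiator steps above are roughly equivalent to the standalone sequence sketched here (arguments and names copied from this run; paths assumed relative to the SPDK checkout).

  # Target side: AIO-backed namespace exported over NVMe-oF/TCP by the
  # interrupt-mode nvmf_tgt started earlier.
  dd if=/dev/zero of=test/nvmf/target/aiofile bs=2048 count=5000
  scripts/rpc.py bdev_aio_create test/nvmf/target/aiofile AIO0 2048
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side (host namespace): connect with the kernel initiator, verify
  # the block device shows up (the test polls lsblk for the serial), disconnect.
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  lsblk -l -o NAME,SERIAL | grep -w SPDKISFASTANDAWESOME
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
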
00:58:36.060 05:53:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:58:36.060 05:53:30 -- common/autotest_common.sh@10 -- # set +x 00:58:36.060 ************************************ 00:58:36.060 START TEST spdkcli_nvmf_tcp 00:58:36.060 ************************************ 00:58:36.060 05:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:58:36.060 * Looking for test storage... 00:58:36.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:58:36.060 05:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:58:36.060 05:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:58:36.060 05:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:58:36.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:36.318 --rc genhtml_branch_coverage=1 00:58:36.318 --rc genhtml_function_coverage=1 00:58:36.318 --rc genhtml_legend=1 00:58:36.318 --rc geninfo_all_blocks=1 00:58:36.318 --rc geninfo_unexecuted_blocks=1 00:58:36.318 00:58:36.318 ' 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:58:36.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:36.318 --rc genhtml_branch_coverage=1 00:58:36.318 --rc genhtml_function_coverage=1 00:58:36.318 --rc genhtml_legend=1 00:58:36.318 --rc geninfo_all_blocks=1 00:58:36.318 --rc geninfo_unexecuted_blocks=1 00:58:36.318 00:58:36.318 ' 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:58:36.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:36.318 --rc genhtml_branch_coverage=1 00:58:36.318 --rc genhtml_function_coverage=1 00:58:36.318 --rc genhtml_legend=1 00:58:36.318 --rc geninfo_all_blocks=1 00:58:36.318 --rc geninfo_unexecuted_blocks=1 00:58:36.318 00:58:36.318 ' 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:58:36.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:36.318 --rc genhtml_branch_coverage=1 00:58:36.318 --rc genhtml_function_coverage=1 00:58:36.318 --rc genhtml_legend=1 00:58:36.318 --rc geninfo_all_blocks=1 00:58:36.318 --rc geninfo_unexecuted_blocks=1 00:58:36.318 00:58:36.318 ' 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:58:36.318 
05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:58:36.318 05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:58:36.319 05:53:30 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:58:36.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=816598 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 816598 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 816598 ']' 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:58:36.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:58:36.319 05:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:58:36.319 [2024-12-09 05:53:30.442871] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
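At this point the test has launched the NVMe-oF target (build/bin/nvmf_tgt -m 0x3 -p 0, pid 816598) and is waiting for it to come up on /var/tmp/spdk.sock. Reduced to a minimal sketch, the launch-and-wait pattern looks as follows (the real waitforlisten helper in autotest_common.sh does more, for example bounded retries):

    # Minimal sketch of run_nvmf_tgt + waitforlisten; binary path and flags as used in this run
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 &
    nvmf_tgt_pid=$!
    until [ -S /var/tmp/spdk.sock ]; do    # assumed readiness check: the RPC UNIX socket appears
        sleep 0.1
    done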
00:58:36.319 [2024-12-09 05:53:30.442977] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid816598 ] 00:58:36.319 [2024-12-09 05:53:30.510693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:58:36.577 [2024-12-09 05:53:30.571028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:58:36.577 [2024-12-09 05:53:30.571033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:58:36.577 05:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:58:36.577 05:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:58:36.577 05:53:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:58:36.577 05:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:58:36.577 05:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:58:36.577 05:53:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:58:36.577 05:53:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:58:36.577 05:53:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:58:36.577 05:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:58:36.577 05:53:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:58:36.577 05:53:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:58:36.577 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:58:36.577 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:58:36.577 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:58:36.577 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:58:36.577 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:58:36.577 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:58:36.577 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:58:36.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:58:36.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:58:36.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:58:36.577 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:58:36.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:58:36.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:58:36.577 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:58:36.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:58:36.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:58:36.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:58:36.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:58:36.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:58:36.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:58:36.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:58:36.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:58:36.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:58:36.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:58:36.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:58:36.577 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:58:36.577 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:58:36.577 ' 00:58:39.100 [2024-12-09 05:53:33.317679] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:58:40.468 [2024-12-09 05:53:34.590016] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:58:43.003 [2024-12-09 05:53:36.937124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:58:45.093 [2024-12-09 05:53:38.955437] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:58:46.463 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:58:46.463 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:58:46.463 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:58:46.463 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:58:46.463 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:58:46.463 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:58:46.463 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:58:46.463 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:58:46.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:58:46.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:58:46.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:58:46.463 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:58:46.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:58:46.463 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:58:46.463 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:58:46.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:58:46.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:58:46.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:58:46.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:58:46.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:58:46.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:58:46.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:58:46.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:58:46.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:58:46.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:58:46.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:58:46.463 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:58:46.463 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:58:46.463 05:53:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:58:46.463 05:53:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:58:46.463 05:53:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:58:46.463 05:53:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:58:46.463 05:53:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:58:46.463 05:53:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:58:46.463 05:53:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:58:46.463 05:53:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:58:47.027 05:53:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:58:47.027 05:53:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:58:47.027 05:53:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:58:47.027 05:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:58:47.027 05:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:58:47.027 
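The check_match step just completed verifies the configuration that the spdkcli_job.py command/expected-output triples created: it dumps the live /nvmf tree and diffs it against a stored expectation file. In essence (relative paths shown for brevity; the redirection of the 'll /nvmf' output into the .test file is an assumption, the trace only shows the two commands and the final rm):

    # Sketch of check_match from test/spdkcli/common.sh
    ./scripts/spdkcli.py ll /nvmf > ./test/spdkcli/match_files/spdkcli_nvmf.test    # assumed capture
    ./test/app/match/match ./test/spdkcli/match_files/spdkcli_nvmf.test.match       # compares .test against .test.match
    rm -f ./test/spdkcli/match_files/spdkcli_nvmf.test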
05:53:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:58:47.027 05:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:58:47.027 05:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:58:47.027 05:53:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:58:47.027 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:58:47.027 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:58:47.027 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:58:47.027 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:58:47.027 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:58:47.027 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:58:47.028 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:58:47.028 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:58:47.028 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:58:47.028 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:58:47.028 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:58:47.028 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:58:47.028 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:58:47.028 ' 00:58:52.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:58:52.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:58:52.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:58:52.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:58:52.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:58:52.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:58:52.285 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:58:52.285 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:58:52.285 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:58:52.285 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:58:52.285 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:58:52.285 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:58:52.285 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:58:52.285 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:58:52.542 05:53:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:58:52.542 05:53:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:58:52.542 05:53:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:58:52.542 
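Each spdkcli path driven above maps onto a single SPDK JSON-RPC call. For orientation, a few hypothetical rpc.py counterparts of the delete steps (RPC names taken from SPDK's rpc.py rather than from this trace, so treat them as illustrative, not as what the test actually ran):

    # Illustrative rpc.py equivalents of the spdkcli deletes above
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2014-08.org.spdk:cnode1 1    # nsid 1, i.e. Malloc3
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2014-08.org.spdk:cnode3
    ./scripts/rpc.py bdev_malloc_delete Malloc6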
05:53:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 816598 00:58:52.542 05:53:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 816598 ']' 00:58:52.542 05:53:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 816598 00:58:52.542 05:53:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:58:52.542 05:53:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:58:52.542 05:53:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 816598 00:58:52.542 05:53:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:58:52.542 05:53:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:58:52.542 05:53:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 816598' 00:58:52.542 killing process with pid 816598 00:58:52.542 05:53:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 816598 00:58:52.542 05:53:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 816598 00:58:52.800 05:53:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:58:52.800 05:53:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:58:52.800 05:53:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 816598 ']' 00:58:52.800 05:53:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 816598 00:58:52.800 05:53:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 816598 ']' 00:58:52.800 05:53:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 816598 00:58:52.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (816598) - No such process 00:58:52.800 05:53:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 816598 is not found' 00:58:52.800 Process with pid 816598 is not found 00:58:52.800 05:53:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:58:52.800 05:53:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:58:52.800 05:53:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:58:52.800 00:58:52.800 real 0m16.674s 00:58:52.800 user 0m35.518s 00:58:52.800 sys 0m0.760s 00:58:52.800 05:53:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:58:52.800 05:53:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:58:52.800 ************************************ 00:58:52.800 END TEST spdkcli_nvmf_tcp 00:58:52.800 ************************************ 00:58:52.800 05:53:46 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:58:52.800 05:53:46 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:58:52.800 05:53:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:58:52.800 05:53:46 -- common/autotest_common.sh@10 -- # set +x 00:58:52.800 ************************************ 00:58:52.800 START TEST nvmf_identify_passthru 00:58:52.800 ************************************ 00:58:52.800 05:53:46 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:58:52.800 * Looking for test storage... 
00:58:52.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:58:52.800 05:53:46 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:58:52.800 05:53:47 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:58:52.800 05:53:47 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:58:53.059 05:53:47 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:58:53.059 05:53:47 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:58:53.059 05:53:47 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:58:53.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:53.059 --rc genhtml_branch_coverage=1 00:58:53.059 --rc genhtml_function_coverage=1 00:58:53.059 --rc genhtml_legend=1 00:58:53.059 --rc geninfo_all_blocks=1 00:58:53.059 --rc geninfo_unexecuted_blocks=1 00:58:53.059 00:58:53.059 ' 00:58:53.059 05:53:47 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:58:53.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:53.059 --rc genhtml_branch_coverage=1 00:58:53.059 --rc genhtml_function_coverage=1 00:58:53.059 --rc genhtml_legend=1 00:58:53.059 --rc geninfo_all_blocks=1 00:58:53.059 --rc geninfo_unexecuted_blocks=1 00:58:53.059 00:58:53.059 ' 00:58:53.059 05:53:47 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:58:53.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:53.059 --rc genhtml_branch_coverage=1 00:58:53.059 --rc genhtml_function_coverage=1 00:58:53.059 --rc genhtml_legend=1 00:58:53.059 --rc geninfo_all_blocks=1 00:58:53.059 --rc geninfo_unexecuted_blocks=1 00:58:53.059 00:58:53.059 ' 00:58:53.059 05:53:47 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:58:53.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:53.059 --rc genhtml_branch_coverage=1 00:58:53.059 --rc genhtml_function_coverage=1 00:58:53.059 --rc genhtml_legend=1 00:58:53.059 --rc geninfo_all_blocks=1 00:58:53.059 --rc geninfo_unexecuted_blocks=1 00:58:53.059 00:58:53.059 ' 00:58:53.059 05:53:47 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:58:53.059 05:53:47 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:58:53.059 05:53:47 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:53.059 05:53:47 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:53.059 05:53:47 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:53.059 05:53:47 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:58:53.059 05:53:47 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:58:53.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:58:53.059 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:58:53.060 05:53:47 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:58:53.060 05:53:47 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:58:53.060 05:53:47 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:58:53.060 05:53:47 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:58:53.060 05:53:47 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:58:53.060 05:53:47 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:53.060 05:53:47 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:53.060 05:53:47 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:53.060 05:53:47 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:58:53.060 05:53:47 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:53.060 05:53:47 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:58:53.060 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:58:53.060 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:58:53.060 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:58:53.060 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:58:53.060 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:58:53.060 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:58:53.060 05:53:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:58:53.060 05:53:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:58:53.060 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:58:53.060 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:58:53.060 05:53:47 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:58:53.060 05:53:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:58:55.586 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:58:55.586 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:58:55.586 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:58:55.586 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:58:55.587 05:53:49 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:58:55.587 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:58:55.587 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:58:55.587 Found net devices under 0000:0a:00.0: cvl_0_0 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:58:55.587 Found net devices under 0000:0a:00.1: cvl_0_1 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:58:55.587 05:53:49 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:58:55.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:58:55.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:58:55.587 00:58:55.587 --- 10.0.0.2 ping statistics --- 00:58:55.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:55.587 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:58:55.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:58:55.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:58:55.587 00:58:55.587 --- 10.0.0.1 ping statistics --- 00:58:55.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:55.587 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:58:55.587 05:53:49 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:58:55.587 05:53:49 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:58:55.587 05:53:49 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:58:55.587 05:53:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:58:55.587 05:53:49 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:58:55.587 05:53:49 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:58:55.587 05:53:49 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:58:55.587 05:53:49 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:58:55.587 05:53:49 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:58:55.587 05:53:49 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:58:55.588 05:53:49 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:58:55.588 05:53:49 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:58:55.588 05:53:49 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:58:55.588 05:53:49 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:58:55.588 05:53:49 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:58:55.588 05:53:49 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:58:55.588 05:53:49 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:58:55.588 05:53:49 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:58:55.588 05:53:49 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:58:55.588 05:53:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:58:55.588 05:53:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:58:55.588 05:53:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:58:59.766 05:53:53 nvmf_identify_passthru -- target/identify_passthru.sh@23 
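The nvmf_tcp_init block above wires the two detected E810 ports back to back through a network namespace, so the target side (10.0.0.2 inside cvl_0_0_ns_spdk) and the initiator side (10.0.0.1 in the default namespace) talk over real hardware; it then locates the first local NVMe controller for passthrough. Condensed from the trace, with interface names and addresses as this run detected them (the head -n1 shortcut stands in for the harness's get_first_nvme_bdf helper):

    # Namespace loopback between the two ports, as set up in the trace above
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let the NVMe/TCP port in
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # sanity-check the path
    # First NVMe PCI address for the passthrough bdev (jq expression from the trace)
    bdf=$(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)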
-- # nvme_serial_number=PHLJ916004901P0FGN 00:58:59.766 05:53:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:58:59.766 05:53:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:58:59.766 05:53:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:59:03.948 05:53:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:59:03.948 05:53:57 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:59:03.948 05:53:57 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:59:03.948 05:53:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:03.948 05:53:57 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:59:03.948 05:53:57 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:59:03.948 05:53:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:03.948 05:53:57 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=821240 00:59:03.948 05:53:57 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:59:03.948 05:53:57 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:59:03.948 05:53:57 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 821240 00:59:03.948 05:53:57 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 821240 ']' 00:59:03.948 05:53:57 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:59:03.948 05:53:57 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:59:03.948 05:53:57 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:59:03.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:59:03.948 05:53:57 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:59:03.948 05:53:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:03.948 [2024-12-09 05:53:57.996772] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:59:03.948 [2024-12-09 05:53:57.996854] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:59:03.948 [2024-12-09 05:53:58.072105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:59:03.948 [2024-12-09 05:53:58.133612] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:59:03.948 [2024-12-09 05:53:58.133678] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:59:03.948 [2024-12-09 05:53:58.133693] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:59:03.948 [2024-12-09 05:53:58.133704] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:59:03.948 [2024-12-09 05:53:58.133714] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:59:03.948 [2024-12-09 05:53:58.135324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:59:03.948 [2024-12-09 05:53:58.135353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:59:03.948 [2024-12-09 05:53:58.135414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:59:03.948 [2024-12-09 05:53:58.135418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:59:04.207 05:53:58 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:59:04.207 05:53:58 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:59:04.207 05:53:58 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:59:04.207 05:53:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:04.207 05:53:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:04.207 INFO: Log level set to 20 00:59:04.207 INFO: Requests: 00:59:04.207 { 00:59:04.207 "jsonrpc": "2.0", 00:59:04.207 "method": "nvmf_set_config", 00:59:04.207 "id": 1, 00:59:04.207 "params": { 00:59:04.207 "admin_cmd_passthru": { 00:59:04.207 "identify_ctrlr": true 00:59:04.207 } 00:59:04.207 } 00:59:04.207 } 00:59:04.207 00:59:04.207 INFO: response: 00:59:04.207 { 00:59:04.207 "jsonrpc": "2.0", 00:59:04.207 "id": 1, 00:59:04.207 "result": true 00:59:04.207 } 00:59:04.207 00:59:04.207 05:53:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:04.207 05:53:58 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:59:04.207 05:53:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:04.207 05:53:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:04.207 INFO: Setting log level to 20 00:59:04.207 INFO: Setting log level to 20 00:59:04.207 INFO: Log level set to 20 00:59:04.207 INFO: Log level set to 20 00:59:04.207 INFO: Requests: 00:59:04.207 { 00:59:04.207 "jsonrpc": "2.0", 00:59:04.207 "method": "framework_start_init", 00:59:04.207 "id": 1 00:59:04.207 } 00:59:04.207 00:59:04.207 INFO: Requests: 00:59:04.207 { 00:59:04.207 "jsonrpc": "2.0", 00:59:04.207 "method": "framework_start_init", 00:59:04.207 "id": 1 00:59:04.207 } 00:59:04.207 00:59:04.207 [2024-12-09 05:53:58.356580] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:59:04.207 INFO: response: 00:59:04.207 { 00:59:04.207 "jsonrpc": "2.0", 00:59:04.207 "id": 1, 00:59:04.207 "result": true 00:59:04.207 } 00:59:04.207 00:59:04.207 INFO: response: 00:59:04.207 { 00:59:04.207 "jsonrpc": "2.0", 00:59:04.207 "id": 1, 00:59:04.207 "result": true 00:59:04.207 } 00:59:04.207 00:59:04.207 05:53:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:04.207 05:53:58 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:59:04.207 05:53:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:04.207 05:53:58 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:59:04.207 INFO: Setting log level to 40 00:59:04.207 INFO: Setting log level to 40 00:59:04.207 INFO: Setting log level to 40 00:59:04.207 [2024-12-09 05:53:58.366742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:59:04.207 05:53:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:04.207 05:53:58 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:59:04.207 05:53:58 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:59:04.207 05:53:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:04.207 05:53:58 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:59:04.207 05:53:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:04.207 05:53:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:07.486 Nvme0n1 00:59:07.486 05:54:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:07.486 05:54:01 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:59:07.486 05:54:01 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:07.486 05:54:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:07.486 05:54:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:07.486 05:54:01 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:59:07.486 05:54:01 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:07.486 05:54:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:07.486 05:54:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:07.486 05:54:01 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:59:07.486 05:54:01 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:07.486 05:54:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:07.486 [2024-12-09 05:54:01.269732] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:59:07.486 05:54:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:07.486 05:54:01 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:59:07.486 05:54:01 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:07.486 05:54:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:07.486 [ 00:59:07.486 { 00:59:07.486 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:59:07.486 "subtype": "Discovery", 00:59:07.486 "listen_addresses": [], 00:59:07.486 "allow_any_host": true, 00:59:07.486 "hosts": [] 00:59:07.486 }, 00:59:07.486 { 00:59:07.486 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:59:07.486 "subtype": "NVMe", 00:59:07.486 "listen_addresses": [ 00:59:07.486 { 00:59:07.486 "trtype": "TCP", 00:59:07.486 "adrfam": "IPv4", 00:59:07.486 "traddr": "10.0.0.2", 00:59:07.486 "trsvcid": "4420" 00:59:07.486 } 00:59:07.486 ], 00:59:07.486 "allow_any_host": true, 00:59:07.486 "hosts": [], 00:59:07.486 "serial_number": 
"SPDK00000000000001", 00:59:07.486 "model_number": "SPDK bdev Controller", 00:59:07.486 "max_namespaces": 1, 00:59:07.486 "min_cntlid": 1, 00:59:07.486 "max_cntlid": 65519, 00:59:07.486 "namespaces": [ 00:59:07.486 { 00:59:07.486 "nsid": 1, 00:59:07.486 "bdev_name": "Nvme0n1", 00:59:07.486 "name": "Nvme0n1", 00:59:07.486 "nguid": "BFDFC77611EF460AB40EF425F76E8A3C", 00:59:07.486 "uuid": "bfdfc776-11ef-460a-b40e-f425f76e8a3c" 00:59:07.486 } 00:59:07.486 ] 00:59:07.486 } 00:59:07.486 ] 00:59:07.486 05:54:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:07.486 05:54:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:59:07.486 05:54:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:59:07.486 05:54:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:59:07.486 05:54:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:59:07.486 05:54:01 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:59:07.486 05:54:01 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:59:07.486 05:54:01 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:59:07.744 05:54:01 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:59:07.744 05:54:01 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:59:07.744 05:54:01 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:59:07.744 05:54:01 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:59:07.744 05:54:01 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:07.744 05:54:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:07.744 05:54:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:07.744 05:54:01 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:59:07.744 05:54:01 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:59:07.744 05:54:01 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:59:07.744 05:54:01 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:59:07.744 05:54:01 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:59:07.744 05:54:01 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:59:07.744 05:54:01 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:59:07.744 05:54:01 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:59:07.744 rmmod nvme_tcp 00:59:07.744 rmmod nvme_fabrics 00:59:07.744 rmmod nvme_keyring 00:59:07.744 05:54:01 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:59:07.744 05:54:01 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:59:07.744 05:54:01 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:59:07.744 05:54:01 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 821240 ']' 00:59:07.744 05:54:01 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 821240 00:59:07.744 05:54:01 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 821240 ']' 00:59:07.744 05:54:01 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 821240 00:59:07.744 05:54:01 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:59:07.744 05:54:01 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:59:07.744 05:54:01 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 821240 00:59:07.744 05:54:01 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:59:07.744 05:54:01 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:59:07.744 05:54:01 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 821240' 00:59:07.744 killing process with pid 821240 00:59:07.744 05:54:01 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 821240 00:59:07.744 05:54:01 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 821240 00:59:09.678 05:54:03 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:59:09.678 05:54:03 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:59:09.678 05:54:03 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:59:09.678 05:54:03 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:59:09.678 05:54:03 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:59:09.678 05:54:03 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:59:09.678 05:54:03 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:59:09.678 05:54:03 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:59:09.678 05:54:03 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:59:09.678 05:54:03 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:59:09.678 05:54:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:59:09.678 05:54:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:59:11.583 05:54:05 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:59:11.583 00:59:11.583 real 0m18.568s 00:59:11.583 user 0m27.079s 00:59:11.583 sys 0m3.240s 00:59:11.583 05:54:05 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:11.583 05:54:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:11.583 ************************************ 00:59:11.583 END TEST nvmf_identify_passthru 00:59:11.583 ************************************ 00:59:11.583 05:54:05 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:59:11.583 05:54:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:11.583 05:54:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:11.583 05:54:05 -- common/autotest_common.sh@10 -- # set +x 00:59:11.583 ************************************ 00:59:11.583 START TEST nvmf_dif 00:59:11.583 ************************************ 00:59:11.583 05:54:05 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:59:11.583 * Looking for test storage... 
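The identify-passthru exercise above reduces to a short RPC sequence against the target. A minimal standalone sketch, assuming the target was started with framework init deferred and that scripts/rpc.py is invoked directly (rpc_cmd in the harness is a thin wrapper around it, talking to the default /var/tmp/spdk.sock socket); every method name and flag below is taken from the rpc_cmd calls visible in the trace:

    # enable Identify-Controller admin-command passthrough before framework init
    ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # attach the physical controller, then export its namespace over NVMe/TCP
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # spdk_nvme_identify is then run against the TCP listener; the test passes when the
    # Serial Number / Model Number it reports match the local controller's values
    # (PHLJ916004901P0FGN / INTEL in the run above), confirming the identify data was passed through.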
00:59:11.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:59:11.583 05:54:05 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:59:11.583 05:54:05 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:59:11.583 05:54:05 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:59:11.583 05:54:05 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:59:11.583 05:54:05 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:59:11.583 05:54:05 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:59:11.583 05:54:05 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:59:11.583 05:54:05 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:59:11.583 05:54:05 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:59:11.583 05:54:05 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:59:11.583 05:54:05 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:59:11.583 05:54:05 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:59:11.583 05:54:05 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:59:11.583 05:54:05 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:59:11.583 05:54:05 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:59:11.583 05:54:05 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:59:11.583 05:54:05 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:59:11.583 05:54:05 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:59:11.583 05:54:05 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:59:11.583 05:54:05 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:59:11.583 05:54:05 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:59:11.583 05:54:05 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:59:11.583 05:54:05 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:59:11.583 05:54:05 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:59:11.583 05:54:05 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:59:11.583 05:54:05 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:59:11.583 05:54:05 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:59:11.584 05:54:05 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:59:11.584 05:54:05 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:59:11.584 05:54:05 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:59:11.584 05:54:05 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:59:11.584 05:54:05 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:59:11.584 05:54:05 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:59:11.584 05:54:05 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:59:11.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:11.584 --rc genhtml_branch_coverage=1 00:59:11.584 --rc genhtml_function_coverage=1 00:59:11.584 --rc genhtml_legend=1 00:59:11.584 --rc geninfo_all_blocks=1 00:59:11.584 --rc geninfo_unexecuted_blocks=1 00:59:11.584 00:59:11.584 ' 00:59:11.584 05:54:05 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:59:11.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:11.584 --rc genhtml_branch_coverage=1 00:59:11.584 --rc genhtml_function_coverage=1 00:59:11.584 --rc genhtml_legend=1 00:59:11.584 --rc geninfo_all_blocks=1 00:59:11.584 --rc geninfo_unexecuted_blocks=1 00:59:11.584 00:59:11.584 ' 00:59:11.584 05:54:05 nvmf_dif -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:59:11.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:11.584 --rc genhtml_branch_coverage=1 00:59:11.584 --rc genhtml_function_coverage=1 00:59:11.584 --rc genhtml_legend=1 00:59:11.584 --rc geninfo_all_blocks=1 00:59:11.584 --rc geninfo_unexecuted_blocks=1 00:59:11.584 00:59:11.584 ' 00:59:11.584 05:54:05 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:59:11.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:59:11.584 --rc genhtml_branch_coverage=1 00:59:11.584 --rc genhtml_function_coverage=1 00:59:11.584 --rc genhtml_legend=1 00:59:11.584 --rc geninfo_all_blocks=1 00:59:11.584 --rc geninfo_unexecuted_blocks=1 00:59:11.584 00:59:11.584 ' 00:59:11.584 05:54:05 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:59:11.584 05:54:05 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:59:11.584 05:54:05 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:59:11.584 05:54:05 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:59:11.584 05:54:05 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:59:11.584 05:54:05 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:11.584 05:54:05 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:11.584 05:54:05 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:11.584 05:54:05 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:59:11.584 05:54:05 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:59:11.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:59:11.584 05:54:05 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:59:11.584 05:54:05 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:59:11.584 05:54:05 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:59:11.584 05:54:05 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:59:11.584 05:54:05 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:59:11.584 05:54:05 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:59:11.584 05:54:05 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:59:11.584 05:54:05 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:59:11.584 05:54:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:59:14.113 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:59:14.113 
05:54:07 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:59:14.113 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:59:14.113 05:54:07 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:59:14.114 Found net devices under 0000:0a:00.0: cvl_0_0 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:59:14.114 Found net devices under 0000:0a:00.1: cvl_0_1 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:59:14.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:59:14.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:59:14.114 00:59:14.114 --- 10.0.0.2 ping statistics --- 00:59:14.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:59:14.114 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:59:14.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:59:14.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:59:14.114 00:59:14.114 --- 10.0.0.1 ping statistics --- 00:59:14.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:59:14.114 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:59:14.114 05:54:07 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:59:15.047 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:59:15.047 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:59:15.047 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:59:15.047 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:59:15.047 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:59:15.047 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:59:15.047 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:59:15.047 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:59:15.047 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:59:15.047 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:59:15.047 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:59:15.047 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:59:15.047 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:59:15.047 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:59:15.047 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:59:15.047 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:59:15.047 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:59:15.047 05:54:09 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:59:15.047 05:54:09 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:59:15.047 05:54:09 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:59:15.047 05:54:09 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:59:15.047 05:54:09 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:59:15.047 05:54:09 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:59:15.047 05:54:09 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:59:15.047 05:54:09 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:59:15.047 05:54:09 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:59:15.047 05:54:09 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:59:15.047 05:54:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:59:15.047 05:54:09 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=825130 00:59:15.047 05:54:09 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:59:15.047 05:54:09 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 825130 00:59:15.047 05:54:09 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 825130 ']' 00:59:15.047 05:54:09 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:59:15.047 05:54:09 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:59:15.047 05:54:09 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:59:15.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:59:15.047 05:54:09 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:59:15.047 05:54:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:59:15.047 [2024-12-09 05:54:09.257636] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 00:59:15.047 [2024-12-09 05:54:09.257711] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:59:15.306 [2024-12-09 05:54:09.327442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:59:15.306 [2024-12-09 05:54:09.383000] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:59:15.306 [2024-12-09 05:54:09.383059] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:59:15.306 [2024-12-09 05:54:09.383082] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:59:15.306 [2024-12-09 05:54:09.383092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:59:15.306 [2024-12-09 05:54:09.383101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:59:15.306 [2024-12-09 05:54:09.383682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:59:15.306 05:54:09 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:59:15.306 05:54:09 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:59:15.306 05:54:09 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:59:15.306 05:54:09 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:59:15.306 05:54:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:59:15.306 05:54:09 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:59:15.306 05:54:09 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:59:15.306 05:54:09 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:59:15.306 05:54:09 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:15.306 05:54:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:59:15.306 [2024-12-09 05:54:09.522767] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:59:15.306 05:54:09 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:15.306 05:54:09 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:59:15.306 05:54:09 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:15.306 05:54:09 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:15.306 05:54:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:59:15.564 ************************************ 00:59:15.564 START TEST fio_dif_1_default 00:59:15.564 ************************************ 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:59:15.564 bdev_null0 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:59:15.564 [2024-12-09 05:54:09.579040] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:59:15.564 05:54:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:59:15.564 { 00:59:15.564 "params": { 00:59:15.564 "name": "Nvme$subsystem", 00:59:15.564 "trtype": "$TEST_TRANSPORT", 00:59:15.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:59:15.565 "adrfam": "ipv4", 00:59:15.565 "trsvcid": "$NVMF_PORT", 00:59:15.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:59:15.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:59:15.565 "hdgst": ${hdgst:-false}, 00:59:15.565 "ddgst": ${ddgst:-false} 00:59:15.565 }, 00:59:15.565 "method": "bdev_nvme_attach_controller" 00:59:15.565 } 00:59:15.565 EOF 00:59:15.565 )") 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
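At this point the fio_dif_1_default case hands fio two generated inputs through /dev/fd: the SPDK bdev JSON on fd 62 and the fio job on fd 61, with the spdk_bdev ioengine preloaded from build/fio/spdk_bdev. A rough standalone equivalent using ordinary files is sketched below; the attach parameters, the LD_PRELOAD path and the fio command line are taken from the trace, while the bdev.json/job.fio file names, the subsystems/bdev/config envelope and the job body are assumptions (only the randread/4k/iodepth=4 banner, not the literal gen_fio_conf output, appears in the log):

    cat > bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false, "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    cat > job.fio <<'EOF'
    ; assumed job body, consistent with the filename0 banner that follows
    [filename0]
    ioengine=spdk_bdev
    thread=1
    rw=randread
    bs=4k
    iodepth=4
    filename=Nvme0n1
    time_based=1
    runtime=10
    EOF
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio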
00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:59:15.565 "params": { 00:59:15.565 "name": "Nvme0", 00:59:15.565 "trtype": "tcp", 00:59:15.565 "traddr": "10.0.0.2", 00:59:15.565 "adrfam": "ipv4", 00:59:15.565 "trsvcid": "4420", 00:59:15.565 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:59:15.565 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:59:15.565 "hdgst": false, 00:59:15.565 "ddgst": false 00:59:15.565 }, 00:59:15.565 "method": "bdev_nvme_attach_controller" 00:59:15.565 }' 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:59:15.565 05:54:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:59:15.823 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:59:15.823 fio-3.35 00:59:15.823 Starting 1 thread 00:59:28.029 00:59:28.029 filename0: (groupid=0, jobs=1): err= 0: pid=825363: Mon Dec 9 05:54:20 2024 00:59:28.029 read: IOPS=205, BW=824KiB/s (843kB/s)(8256KiB/10023msec) 00:59:28.029 slat (nsec): min=6980, max=65354, avg=8762.98, stdev=2904.63 00:59:28.029 clat (usec): min=545, max=44142, avg=19395.55, stdev=20275.09 00:59:28.029 lat (usec): min=552, max=44170, avg=19404.31, stdev=20274.98 00:59:28.029 clat percentiles (usec): 00:59:28.029 | 1.00th=[ 578], 5.00th=[ 594], 10.00th=[ 603], 20.00th=[ 619], 00:59:28.029 | 30.00th=[ 644], 40.00th=[ 668], 50.00th=[ 701], 60.00th=[41157], 00:59:28.029 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:59:28.029 | 99.00th=[41681], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:59:28.029 | 99.99th=[44303] 00:59:28.029 bw ( KiB/s): min= 736, max= 896, per=100.00%, avg=824.00, stdev=60.43, samples=20 00:59:28.029 iops : min= 184, max= 224, avg=206.00, stdev=15.11, samples=20 00:59:28.029 lat (usec) : 750=53.63%, 1000=0.24% 00:59:28.029 lat (msec) : 50=46.12% 00:59:28.029 cpu : usr=90.50%, sys=9.20%, ctx=19, majf=0, minf=276 00:59:28.029 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:59:28.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:28.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:28.029 issued rwts: total=2064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:28.029 latency : target=0, window=0, percentile=100.00%, depth=4 00:59:28.029 
00:59:28.029 Run status group 0 (all jobs): 00:59:28.029 READ: bw=824KiB/s (843kB/s), 824KiB/s-824KiB/s (843kB/s-843kB/s), io=8256KiB (8454kB), run=10023-10023msec 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:28.029 00:59:28.029 real 0m11.215s 00:59:28.029 user 0m10.169s 00:59:28.029 sys 0m1.240s 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:59:28.029 ************************************ 00:59:28.029 END TEST fio_dif_1_default 00:59:28.029 ************************************ 00:59:28.029 05:54:20 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:59:28.029 05:54:20 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:28.029 05:54:20 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:28.029 05:54:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:59:28.029 ************************************ 00:59:28.029 START TEST fio_dif_1_multi_subsystems 00:59:28.029 ************************************ 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:59:28.029 bdev_null0 00:59:28.029 05:54:20 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:59:28.029 [2024-12-09 05:54:20.849482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:59:28.029 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:59:28.030 bdev_null1 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:59:28.030 { 00:59:28.030 "params": { 00:59:28.030 "name": "Nvme$subsystem", 00:59:28.030 "trtype": "$TEST_TRANSPORT", 00:59:28.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:59:28.030 "adrfam": "ipv4", 00:59:28.030 "trsvcid": "$NVMF_PORT", 00:59:28.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:59:28.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:59:28.030 "hdgst": ${hdgst:-false}, 00:59:28.030 "ddgst": ${ddgst:-false} 00:59:28.030 }, 00:59:28.030 "method": "bdev_nvme_attach_controller" 00:59:28.030 } 00:59:28.030 EOF 00:59:28.030 )") 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:59:28.030 
05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:59:28.030 { 00:59:28.030 "params": { 00:59:28.030 "name": "Nvme$subsystem", 00:59:28.030 "trtype": "$TEST_TRANSPORT", 00:59:28.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:59:28.030 "adrfam": "ipv4", 00:59:28.030 "trsvcid": "$NVMF_PORT", 00:59:28.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:59:28.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:59:28.030 "hdgst": ${hdgst:-false}, 00:59:28.030 "ddgst": ${ddgst:-false} 00:59:28.030 }, 00:59:28.030 "method": "bdev_nvme_attach_controller" 00:59:28.030 } 00:59:28.030 EOF 00:59:28.030 )") 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:59:28.030 "params": { 00:59:28.030 "name": "Nvme0", 00:59:28.030 "trtype": "tcp", 00:59:28.030 "traddr": "10.0.0.2", 00:59:28.030 "adrfam": "ipv4", 00:59:28.030 "trsvcid": "4420", 00:59:28.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:59:28.030 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:59:28.030 "hdgst": false, 00:59:28.030 "ddgst": false 00:59:28.030 }, 00:59:28.030 "method": "bdev_nvme_attach_controller" 00:59:28.030 },{ 00:59:28.030 "params": { 00:59:28.030 "name": "Nvme1", 00:59:28.030 "trtype": "tcp", 00:59:28.030 "traddr": "10.0.0.2", 00:59:28.030 "adrfam": "ipv4", 00:59:28.030 "trsvcid": "4420", 00:59:28.030 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:59:28.030 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:59:28.030 "hdgst": false, 00:59:28.030 "ddgst": false 00:59:28.030 }, 00:59:28.030 "method": "bdev_nvme_attach_controller" 00:59:28.030 }' 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:59:28.030 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 
-- # asan_lib= 00:59:28.031 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:59:28.031 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:59:28.031 05:54:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:59:28.031 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:59:28.031 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:59:28.031 fio-3.35 00:59:28.031 Starting 2 threads 00:59:37.995 00:59:37.995 filename0: (groupid=0, jobs=1): err= 0: pid=826768: Mon Dec 9 05:54:31 2024 00:59:37.995 read: IOPS=191, BW=766KiB/s (784kB/s)(7664KiB/10007msec) 00:59:37.995 slat (nsec): min=7183, max=71324, avg=9109.83, stdev=3304.79 00:59:37.995 clat (usec): min=557, max=45165, avg=20861.54, stdev=20342.38 00:59:37.995 lat (usec): min=564, max=45191, avg=20870.65, stdev=20342.07 00:59:37.995 clat percentiles (usec): 00:59:37.995 | 1.00th=[ 586], 5.00th=[ 594], 10.00th=[ 611], 20.00th=[ 627], 00:59:37.995 | 30.00th=[ 652], 40.00th=[ 685], 50.00th=[ 906], 60.00th=[41157], 00:59:37.995 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:59:37.995 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:59:37.995 | 99.99th=[45351] 00:59:37.995 bw ( KiB/s): min= 704, max= 832, per=49.42%, avg=764.80, stdev=32.67, samples=20 00:59:37.995 iops : min= 176, max= 208, avg=191.20, stdev= 8.17, samples=20 00:59:37.995 lat (usec) : 750=46.66%, 1000=3.44% 00:59:37.995 lat (msec) : 2=0.21%, 50=49.69% 00:59:37.995 cpu : usr=94.63%, sys=5.05%, ctx=15, majf=0, minf=175 00:59:37.995 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:59:37.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:37.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:37.995 issued rwts: total=1916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:37.995 latency : target=0, window=0, percentile=100.00%, depth=4 00:59:37.995 filename1: (groupid=0, jobs=1): err= 0: pid=826769: Mon Dec 9 05:54:31 2024 00:59:37.995 read: IOPS=195, BW=781KiB/s (800kB/s)(7824KiB/10019msec) 00:59:37.995 slat (nsec): min=7217, max=37553, avg=9537.20, stdev=3826.37 00:59:37.995 clat (usec): min=550, max=45160, avg=20457.44, stdev=20329.23 00:59:37.995 lat (usec): min=558, max=45174, avg=20466.97, stdev=20329.09 00:59:37.995 clat percentiles (usec): 00:59:37.995 | 1.00th=[ 578], 5.00th=[ 611], 10.00th=[ 619], 20.00th=[ 644], 00:59:37.995 | 30.00th=[ 676], 40.00th=[ 709], 50.00th=[ 783], 60.00th=[41157], 00:59:37.995 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:59:37.995 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:59:37.995 | 99.99th=[45351] 00:59:37.995 bw ( KiB/s): min= 704, max= 896, per=50.46%, avg=780.80, stdev=44.53, samples=20 00:59:37.995 iops : min= 176, max= 224, avg=195.20, stdev=11.13, samples=20 00:59:37.995 lat (usec) : 750=47.09%, 1000=4.04% 00:59:37.995 lat (msec) : 2=0.20%, 50=48.67% 00:59:37.995 cpu : usr=95.35%, sys=4.33%, ctx=19, majf=0, minf=107 00:59:37.995 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:59:37.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:37.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:37.995 issued rwts: total=1956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:37.995 latency : target=0, window=0, percentile=100.00%, depth=4 00:59:37.995 00:59:37.995 Run status group 0 (all jobs): 00:59:37.995 READ: bw=1546KiB/s (1583kB/s), 766KiB/s-781KiB/s (784kB/s-800kB/s), io=15.1MiB (15.9MB), run=10007-10019msec 00:59:38.254 05:54:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:59:38.254 05:54:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:59:38.254 05:54:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:59:38.254 05:54:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:59:38.254 05:54:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:59:38.254 05:54:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:59:38.254 05:54:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:38.254 05:54:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:59:38.254 05:54:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:38.254 05:54:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:59:38.254 05:54:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:38.254 05:54:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:59:38.254 05:54:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:38.254 05:54:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:59:38.254 05:54:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:59:38.254 05:54:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:59:38.254 05:54:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:59:38.254 05:54:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:38.254 05:54:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:59:38.254 05:54:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:38.254 05:54:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:59:38.254 05:54:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:38.254 05:54:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:59:38.254 05:54:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:38.254 00:59:38.254 real 0m11.438s 00:59:38.254 user 0m20.440s 00:59:38.254 sys 0m1.247s 00:59:38.254 05:54:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:38.254 05:54:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:59:38.254 ************************************ 00:59:38.254 END TEST fio_dif_1_multi_subsystems 00:59:38.254 ************************************ 00:59:38.254 05:54:32 nvmf_dif -- 
target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:59:38.254 05:54:32 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:59:38.254 05:54:32 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:59:38.254 05:54:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:59:38.254 ************************************ 00:59:38.254 START TEST fio_dif_rand_params 00:59:38.254 ************************************ 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:38.254 bdev_null0 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:38.254 [2024-12-09 05:54:32.341373] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:59:38.254 05:54:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:59:38.254 { 00:59:38.254 "params": { 00:59:38.255 "name": "Nvme$subsystem", 00:59:38.255 "trtype": "$TEST_TRANSPORT", 00:59:38.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:59:38.255 "adrfam": "ipv4", 00:59:38.255 "trsvcid": "$NVMF_PORT", 00:59:38.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:59:38.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:59:38.255 "hdgst": ${hdgst:-false}, 00:59:38.255 "ddgst": ${ddgst:-false} 00:59:38.255 }, 00:59:38.255 "method": "bdev_nvme_attach_controller" 00:59:38.255 } 00:59:38.255 EOF 00:59:38.255 )") 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@584 -- # jq . 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:59:38.255 "params": { 00:59:38.255 "name": "Nvme0", 00:59:38.255 "trtype": "tcp", 00:59:38.255 "traddr": "10.0.0.2", 00:59:38.255 "adrfam": "ipv4", 00:59:38.255 "trsvcid": "4420", 00:59:38.255 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:59:38.255 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:59:38.255 "hdgst": false, 00:59:38.255 "ddgst": false 00:59:38.255 }, 00:59:38.255 "method": "bdev_nvme_attach_controller" 00:59:38.255 }' 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:59:38.255 05:54:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:59:38.513 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:59:38.513 ... 
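Note on the invocation traced above: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem, and fio loads the spdk_bdev engine via LD_PRELOAD while reading that JSON through --spdk_json_conf. The sketch below reproduces the same pattern outside the dif.sh harness. It is a hedged example, not the harness's code: the plugin path, fio path, and target address are copied from this log; the outer "subsystems"/"bdev"/"config" wrapper is the standard SPDK JSON-config layout that the printf above does not show; the job parameters, the /tmp/nvme.json path, and the Nvme0n1 filename are illustrative assumptions.

#!/usr/bin/env bash
# Sketch only: attach one NVMe-oF/TCP controller and run fio against it with the
# SPDK bdev engine. Paths and address taken from this log; adjust for other hosts.
SPDK_FIO_PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
FIO=/usr/src/fio/fio

# JSON equivalent of the printf output above, wrapped in the standard SPDK config
# structure (assumed here; the log only shows the per-controller entries).
cat > /tmp/nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0"
          }
        }
      ]
    }
  ]
}
EOF

# The attached namespace is typically exposed to fio as bdev "Nvme0n1".
# thread mode is required by the SPDK fio plugin.
LD_PRELOAD="$SPDK_FIO_PLUGIN" "$FIO" \
    --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme.json --thread=1 \
    --name=job0 --filename=Nvme0n1 \
    --rw=randread --bs=4k --iodepth=4 --time_based --runtime=10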
00:59:38.513 fio-3.35 00:59:38.513 Starting 3 threads 00:59:45.078 00:59:45.078 filename0: (groupid=0, jobs=1): err= 0: pid=828169: Mon Dec 9 05:54:38 2024 00:59:45.078 read: IOPS=239, BW=30.0MiB/s (31.4MB/s)(151MiB/5043msec) 00:59:45.078 slat (usec): min=7, max=118, avg=13.61, stdev= 4.40 00:59:45.078 clat (usec): min=5375, max=53434, avg=12462.39, stdev=4710.16 00:59:45.078 lat (usec): min=5400, max=53447, avg=12476.00, stdev=4710.42 00:59:45.078 clat percentiles (usec): 00:59:45.078 | 1.00th=[ 7898], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[10683], 00:59:45.078 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11863], 60.00th=[12125], 00:59:45.078 | 70.00th=[12518], 80.00th=[13173], 90.00th=[14222], 95.00th=[15795], 00:59:45.078 | 99.00th=[46924], 99.50th=[48497], 99.90th=[53216], 99.95th=[53216], 00:59:45.078 | 99.99th=[53216] 00:59:45.078 bw ( KiB/s): min=28672, max=32768, per=36.11%, avg=30899.20, stdev=1453.42, samples=10 00:59:45.078 iops : min= 224, max= 256, avg=241.40, stdev=11.35, samples=10 00:59:45.078 lat (msec) : 10=9.51%, 20=89.08%, 50=0.91%, 100=0.50% 00:59:45.078 cpu : usr=93.08%, sys=6.43%, ctx=14, majf=0, minf=135 00:59:45.078 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:59:45.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:45.078 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:45.078 issued rwts: total=1209,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:45.078 latency : target=0, window=0, percentile=100.00%, depth=3 00:59:45.078 filename0: (groupid=0, jobs=1): err= 0: pid=828170: Mon Dec 9 05:54:38 2024 00:59:45.078 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(135MiB/5045msec) 00:59:45.078 slat (usec): min=7, max=116, avg=13.49, stdev= 4.20 00:59:45.078 clat (usec): min=5604, max=53719, avg=13918.72, stdev=4688.67 00:59:45.078 lat (usec): min=5612, max=53732, avg=13932.21, stdev=4688.89 00:59:45.078 clat percentiles (usec): 00:59:45.078 | 1.00th=[ 7898], 5.00th=[ 9241], 10.00th=[10814], 20.00th=[11600], 00:59:45.078 | 30.00th=[12125], 40.00th=[12649], 50.00th=[13304], 60.00th=[14222], 00:59:45.078 | 70.00th=[15008], 80.00th=[15664], 90.00th=[16909], 95.00th=[17433], 00:59:45.078 | 99.00th=[47449], 99.50th=[49546], 99.90th=[53216], 99.95th=[53740], 00:59:45.078 | 99.99th=[53740] 00:59:45.078 bw ( KiB/s): min=23808, max=30208, per=32.34%, avg=27673.60, stdev=1816.02, samples=10 00:59:45.078 iops : min= 186, max= 236, avg=216.20, stdev=14.19, samples=10 00:59:45.078 lat (msec) : 10=6.46%, 20=92.24%, 50=0.92%, 100=0.37% 00:59:45.078 cpu : usr=93.20%, sys=6.30%, ctx=14, majf=0, minf=114 00:59:45.078 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:59:45.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:45.078 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:45.078 issued rwts: total=1083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:45.078 latency : target=0, window=0, percentile=100.00%, depth=3 00:59:45.078 filename0: (groupid=0, jobs=1): err= 0: pid=828171: Mon Dec 9 05:54:38 2024 00:59:45.078 read: IOPS=215, BW=27.0MiB/s (28.3MB/s)(135MiB/5007msec) 00:59:45.078 slat (nsec): min=7487, max=44743, avg=13140.44, stdev=2286.97 00:59:45.078 clat (usec): min=4980, max=54192, avg=13875.22, stdev=5401.93 00:59:45.078 lat (usec): min=4988, max=54205, avg=13888.36, stdev=5402.02 00:59:45.078 clat percentiles (usec): 00:59:45.078 | 1.00th=[ 5473], 5.00th=[ 8455], 10.00th=[10421], 20.00th=[11469], 00:59:45.078 | 
30.00th=[11994], 40.00th=[12518], 50.00th=[13173], 60.00th=[14353], 00:59:45.078 | 70.00th=[15008], 80.00th=[15664], 90.00th=[16581], 95.00th=[17171], 00:59:45.078 | 99.00th=[51119], 99.50th=[52691], 99.90th=[53740], 99.95th=[54264], 00:59:45.078 | 99.99th=[54264] 00:59:45.078 bw ( KiB/s): min=21248, max=31488, per=32.25%, avg=27596.80, stdev=2724.79, samples=10 00:59:45.078 iops : min= 166, max= 246, avg=215.60, stdev=21.29, samples=10 00:59:45.078 lat (msec) : 10=7.49%, 20=90.84%, 50=0.56%, 100=1.11% 00:59:45.078 cpu : usr=92.95%, sys=6.55%, ctx=5, majf=0, minf=76 00:59:45.078 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:59:45.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:45.078 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:45.078 issued rwts: total=1081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:45.078 latency : target=0, window=0, percentile=100.00%, depth=3 00:59:45.078 00:59:45.078 Run status group 0 (all jobs): 00:59:45.078 READ: bw=83.6MiB/s (87.6MB/s), 26.8MiB/s-30.0MiB/s (28.1MB/s-31.4MB/s), io=422MiB (442MB), run=5007-5045msec 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create 
bdev_null0 64 512 --md-size 16 --dif-type 2 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:45.078 bdev_null0 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:45.078 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:45.079 [2024-12-09 05:54:38.592994] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:45.079 bdev_null1 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:45.079 bdev_null2 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:59:45.079 { 00:59:45.079 "params": { 00:59:45.079 "name": "Nvme$subsystem", 00:59:45.079 
"trtype": "$TEST_TRANSPORT", 00:59:45.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:59:45.079 "adrfam": "ipv4", 00:59:45.079 "trsvcid": "$NVMF_PORT", 00:59:45.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:59:45.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:59:45.079 "hdgst": ${hdgst:-false}, 00:59:45.079 "ddgst": ${ddgst:-false} 00:59:45.079 }, 00:59:45.079 "method": "bdev_nvme_attach_controller" 00:59:45.079 } 00:59:45.079 EOF 00:59:45.079 )") 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:59:45.079 { 00:59:45.079 "params": { 00:59:45.079 "name": "Nvme$subsystem", 00:59:45.079 "trtype": "$TEST_TRANSPORT", 00:59:45.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:59:45.079 "adrfam": "ipv4", 00:59:45.079 "trsvcid": "$NVMF_PORT", 00:59:45.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:59:45.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:59:45.079 "hdgst": ${hdgst:-false}, 00:59:45.079 "ddgst": ${ddgst:-false} 00:59:45.079 }, 00:59:45.079 "method": "bdev_nvme_attach_controller" 00:59:45.079 } 00:59:45.079 EOF 00:59:45.079 )") 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= 
files )) 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:59:45.079 { 00:59:45.079 "params": { 00:59:45.079 "name": "Nvme$subsystem", 00:59:45.079 "trtype": "$TEST_TRANSPORT", 00:59:45.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:59:45.079 "adrfam": "ipv4", 00:59:45.079 "trsvcid": "$NVMF_PORT", 00:59:45.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:59:45.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:59:45.079 "hdgst": ${hdgst:-false}, 00:59:45.079 "ddgst": ${ddgst:-false} 00:59:45.079 }, 00:59:45.079 "method": "bdev_nvme_attach_controller" 00:59:45.079 } 00:59:45.079 EOF 00:59:45.079 )") 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:59:45.079 05:54:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:59:45.079 "params": { 00:59:45.079 "name": "Nvme0", 00:59:45.079 "trtype": "tcp", 00:59:45.079 "traddr": "10.0.0.2", 00:59:45.079 "adrfam": "ipv4", 00:59:45.079 "trsvcid": "4420", 00:59:45.079 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:59:45.079 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:59:45.079 "hdgst": false, 00:59:45.079 "ddgst": false 00:59:45.079 }, 00:59:45.079 "method": "bdev_nvme_attach_controller" 00:59:45.079 },{ 00:59:45.079 "params": { 00:59:45.079 "name": "Nvme1", 00:59:45.079 "trtype": "tcp", 00:59:45.079 "traddr": "10.0.0.2", 00:59:45.079 "adrfam": "ipv4", 00:59:45.079 "trsvcid": "4420", 00:59:45.079 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:59:45.079 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:59:45.079 "hdgst": false, 00:59:45.079 "ddgst": false 00:59:45.079 }, 00:59:45.079 "method": "bdev_nvme_attach_controller" 00:59:45.079 },{ 00:59:45.079 "params": { 00:59:45.079 "name": "Nvme2", 00:59:45.079 "trtype": "tcp", 00:59:45.079 "traddr": "10.0.0.2", 00:59:45.079 "adrfam": "ipv4", 00:59:45.079 "trsvcid": "4420", 00:59:45.079 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:59:45.079 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:59:45.080 "hdgst": false, 00:59:45.080 "ddgst": false 00:59:45.080 }, 00:59:45.080 "method": "bdev_nvme_attach_controller" 00:59:45.080 }' 00:59:45.080 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:59:45.080 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:59:45.080 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:59:45.080 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:59:45.080 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:59:45.080 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:59:45.080 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:59:45.080 05:54:38 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:59:45.080 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:59:45.080 05:54:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:59:45.080 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:59:45.080 ... 00:59:45.080 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:59:45.080 ... 00:59:45.080 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:59:45.080 ... 00:59:45.080 fio-3.35 00:59:45.080 Starting 24 threads 00:59:57.315 00:59:57.315 filename0: (groupid=0, jobs=1): err= 0: pid=829034: Mon Dec 9 05:54:50 2024 00:59:57.315 read: IOPS=90, BW=363KiB/s (372kB/s)(3680KiB/10143msec) 00:59:57.315 slat (nsec): min=7530, max=63201, avg=11784.97, stdev=6636.41 00:59:57.315 clat (msec): min=77, max=277, avg=174.98, stdev=38.07 00:59:57.315 lat (msec): min=77, max=277, avg=175.00, stdev=38.07 00:59:57.315 clat percentiles (msec): 00:59:57.315 | 1.00th=[ 78], 5.00th=[ 101], 10.00th=[ 127], 20.00th=[ 155], 00:59:57.315 | 30.00th=[ 159], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 178], 00:59:57.315 | 70.00th=[ 194], 80.00th=[ 203], 90.00th=[ 226], 95.00th=[ 236], 00:59:57.315 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 279], 99.95th=[ 279], 00:59:57.315 | 99.99th=[ 279] 00:59:57.315 bw ( KiB/s): min= 256, max= 512, per=5.51%, avg=361.60, stdev=54.79, samples=20 00:59:57.315 iops : min= 64, max= 128, avg=90.40, stdev=13.70, samples=20 00:59:57.315 lat (msec) : 100=3.26%, 250=93.26%, 500=3.48% 00:59:57.315 cpu : usr=97.78%, sys=1.86%, ctx=16, majf=0, minf=9 00:59:57.315 IO depths : 1=0.3%, 2=1.4%, 4=9.0%, 8=76.7%, 16=12.5%, 32=0.0%, >=64=0.0% 00:59:57.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.315 complete : 0=0.0%, 4=89.5%, 8=5.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.315 issued rwts: total=920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:57.315 latency : target=0, window=0, percentile=100.00%, depth=16 00:59:57.315 filename0: (groupid=0, jobs=1): err= 0: pid=829035: Mon Dec 9 05:54:50 2024 00:59:57.315 read: IOPS=60, BW=240KiB/s (246kB/s)(2432KiB/10125msec) 00:59:57.315 slat (nsec): min=6114, max=87365, avg=28109.95, stdev=11477.28 00:59:57.315 clat (msec): min=159, max=505, avg=266.19, stdev=53.14 00:59:57.315 lat (msec): min=159, max=505, avg=266.21, stdev=53.13 00:59:57.315 clat percentiles (msec): 00:59:57.315 | 1.00th=[ 176], 5.00th=[ 194], 10.00th=[ 224], 20.00th=[ 245], 00:59:57.315 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 264], 00:59:57.315 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 317], 95.00th=[ 330], 00:59:57.315 | 99.00th=[ 506], 99.50th=[ 506], 99.90th=[ 506], 99.95th=[ 506], 00:59:57.315 | 99.99th=[ 506] 00:59:57.315 bw ( KiB/s): min= 128, max= 384, per=3.80%, avg=249.26, stdev=49.84, samples=19 00:59:57.315 iops : min= 32, max= 96, avg=62.32, stdev=12.46, samples=19 00:59:57.315 lat (msec) : 250=36.51%, 500=60.86%, 750=2.63% 00:59:57.315 cpu : usr=97.39%, sys=1.68%, ctx=277, majf=0, minf=9 00:59:57.315 IO depths : 1=4.1%, 2=10.4%, 4=25.0%, 8=52.1%, 16=8.4%, 32=0.0%, >=64=0.0% 00:59:57.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:59:57.315 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.316 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:57.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:59:57.316 filename0: (groupid=0, jobs=1): err= 0: pid=829036: Mon Dec 9 05:54:50 2024 00:59:57.316 read: IOPS=64, BW=258KiB/s (264kB/s)(2616KiB/10135msec) 00:59:57.316 slat (usec): min=8, max=128, avg=58.69, stdev=24.97 00:59:57.316 clat (msec): min=168, max=366, avg=247.11, stdev=30.37 00:59:57.316 lat (msec): min=168, max=366, avg=247.16, stdev=30.39 00:59:57.316 clat percentiles (msec): 00:59:57.316 | 1.00th=[ 169], 5.00th=[ 192], 10.00th=[ 215], 20.00th=[ 226], 00:59:57.316 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 255], 00:59:57.316 | 70.00th=[ 264], 80.00th=[ 271], 90.00th=[ 275], 95.00th=[ 279], 00:59:57.316 | 99.00th=[ 347], 99.50th=[ 355], 99.90th=[ 368], 99.95th=[ 368], 00:59:57.316 | 99.99th=[ 368] 00:59:57.316 bw ( KiB/s): min= 128, max= 384, per=3.89%, avg=255.20, stdev=56.98, samples=20 00:59:57.316 iops : min= 32, max= 96, avg=63.80, stdev=14.24, samples=20 00:59:57.316 lat (msec) : 250=47.40%, 500=52.60% 00:59:57.316 cpu : usr=97.82%, sys=1.52%, ctx=80, majf=0, minf=9 00:59:57.316 IO depths : 1=4.3%, 2=10.6%, 4=25.1%, 8=52.0%, 16=8.1%, 32=0.0%, >=64=0.0% 00:59:57.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.316 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.316 issued rwts: total=654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:57.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:59:57.316 filename0: (groupid=0, jobs=1): err= 0: pid=829037: Mon Dec 9 05:54:50 2024 00:59:57.316 read: IOPS=60, BW=242KiB/s (248kB/s)(2432KiB/10038msec) 00:59:57.316 slat (usec): min=4, max=114, avg=67.73, stdev=14.32 00:59:57.316 clat (msec): min=217, max=332, avg=263.54, stdev=26.92 00:59:57.316 lat (msec): min=217, max=332, avg=263.61, stdev=26.92 00:59:57.316 clat percentiles (msec): 00:59:57.316 | 1.00th=[ 218], 5.00th=[ 226], 10.00th=[ 230], 20.00th=[ 243], 00:59:57.316 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 259], 60.00th=[ 266], 00:59:57.316 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 305], 95.00th=[ 326], 00:59:57.316 | 99.00th=[ 334], 99.50th=[ 334], 99.90th=[ 334], 99.95th=[ 334], 00:59:57.316 | 99.99th=[ 334] 00:59:57.316 bw ( KiB/s): min= 128, max= 384, per=3.60%, avg=236.80, stdev=62.64, samples=20 00:59:57.316 iops : min= 32, max= 96, avg=59.20, stdev=15.66, samples=20 00:59:57.316 lat (msec) : 250=31.58%, 500=68.42% 00:59:57.316 cpu : usr=97.74%, sys=1.57%, ctx=242, majf=0, minf=9 00:59:57.316 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:59:57.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.316 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.316 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:57.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:59:57.316 filename0: (groupid=0, jobs=1): err= 0: pid=829038: Mon Dec 9 05:54:50 2024 00:59:57.316 read: IOPS=70, BW=280KiB/s (287kB/s)(2840KiB/10142msec) 00:59:57.316 slat (usec): min=9, max=112, avg=56.04, stdev=20.23 00:59:57.316 clat (msec): min=99, max=423, avg=226.46, stdev=64.31 00:59:57.316 lat (msec): min=99, max=424, avg=226.51, stdev=64.32 00:59:57.316 clat percentiles (msec): 00:59:57.316 | 1.00th=[ 101], 5.00th=[ 118], 10.00th=[ 142], 20.00th=[ 159], 
00:59:57.316 | 30.00th=[ 171], 40.00th=[ 218], 50.00th=[ 245], 60.00th=[ 255], 00:59:57.316 | 70.00th=[ 271], 80.00th=[ 275], 90.00th=[ 305], 95.00th=[ 313], 00:59:57.316 | 99.00th=[ 384], 99.50th=[ 397], 99.90th=[ 426], 99.95th=[ 426], 00:59:57.316 | 99.99th=[ 426] 00:59:57.316 bw ( KiB/s): min= 128, max= 432, per=4.23%, avg=277.60, stdev=79.45, samples=20 00:59:57.316 iops : min= 32, max= 108, avg=69.40, stdev=19.86, samples=20 00:59:57.316 lat (msec) : 100=1.55%, 250=51.41%, 500=47.04% 00:59:57.316 cpu : usr=97.90%, sys=1.63%, ctx=14, majf=0, minf=9 00:59:57.316 IO depths : 1=3.4%, 2=8.3%, 4=21.0%, 8=58.2%, 16=9.2%, 32=0.0%, >=64=0.0% 00:59:57.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.316 complete : 0=0.0%, 4=93.0%, 8=1.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.316 issued rwts: total=710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:57.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:59:57.316 filename0: (groupid=0, jobs=1): err= 0: pid=829039: Mon Dec 9 05:54:50 2024 00:59:57.316 read: IOPS=61, BW=246KiB/s (252kB/s)(2496KiB/10136msec) 00:59:57.316 slat (nsec): min=7952, max=98587, avg=62052.40, stdev=16430.64 00:59:57.316 clat (msec): min=160, max=384, avg=259.35, stdev=36.11 00:59:57.316 lat (msec): min=160, max=384, avg=259.41, stdev=36.11 00:59:57.316 clat percentiles (msec): 00:59:57.316 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 224], 20.00th=[ 245], 00:59:57.316 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 264], 00:59:57.316 | 70.00th=[ 271], 80.00th=[ 275], 90.00th=[ 317], 95.00th=[ 317], 00:59:57.316 | 99.00th=[ 338], 99.50th=[ 347], 99.90th=[ 384], 99.95th=[ 384], 00:59:57.316 | 99.99th=[ 384] 00:59:57.316 bw ( KiB/s): min= 128, max= 384, per=3.71%, avg=243.20, stdev=55.57, samples=20 00:59:57.316 iops : min= 32, max= 96, avg=60.80, stdev=13.89, samples=20 00:59:57.316 lat (msec) : 250=36.22%, 500=63.78% 00:59:57.316 cpu : usr=98.26%, sys=1.28%, ctx=15, majf=0, minf=9 00:59:57.316 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:59:57.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.316 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.316 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:57.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:59:57.316 filename0: (groupid=0, jobs=1): err= 0: pid=829040: Mon Dec 9 05:54:50 2024 00:59:57.316 read: IOPS=91, BW=366KiB/s (375kB/s)(3712KiB/10143msec) 00:59:57.316 slat (nsec): min=7575, max=93667, avg=15576.53, stdev=11107.55 00:59:57.316 clat (msec): min=100, max=270, avg=174.19, stdev=26.64 00:59:57.316 lat (msec): min=100, max=270, avg=174.21, stdev=26.63 00:59:57.316 clat percentiles (msec): 00:59:57.316 | 1.00th=[ 102], 5.00th=[ 142], 10.00th=[ 144], 20.00th=[ 153], 00:59:57.316 | 30.00th=[ 157], 40.00th=[ 171], 50.00th=[ 174], 60.00th=[ 178], 00:59:57.316 | 70.00th=[ 186], 80.00th=[ 194], 90.00th=[ 207], 95.00th=[ 218], 00:59:57.316 | 99.00th=[ 236], 99.50th=[ 271], 99.90th=[ 271], 99.95th=[ 271], 00:59:57.316 | 99.99th=[ 271] 00:59:57.316 bw ( KiB/s): min= 256, max= 512, per=5.55%, avg=364.80, stdev=61.11, samples=20 00:59:57.316 iops : min= 64, max= 128, avg=91.20, stdev=15.28, samples=20 00:59:57.316 lat (msec) : 250=99.14%, 500=0.86% 00:59:57.316 cpu : usr=97.86%, sys=1.72%, ctx=19, majf=0, minf=11 00:59:57.316 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:59:57.316 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.316 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.316 issued rwts: total=928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:57.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:59:57.316 filename0: (groupid=0, jobs=1): err= 0: pid=829041: Mon Dec 9 05:54:50 2024 00:59:57.316 read: IOPS=60, BW=240KiB/s (246kB/s)(2432KiB/10124msec) 00:59:57.316 slat (nsec): min=10352, max=69114, avg=25116.12, stdev=7768.35 00:59:57.316 clat (msec): min=163, max=412, avg=266.07, stdev=41.27 00:59:57.316 lat (msec): min=163, max=412, avg=266.09, stdev=41.27 00:59:57.316 clat percentiles (msec): 00:59:57.316 | 1.00th=[ 169], 5.00th=[ 218], 10.00th=[ 226], 20.00th=[ 243], 00:59:57.316 | 30.00th=[ 245], 40.00th=[ 257], 50.00th=[ 262], 60.00th=[ 268], 00:59:57.316 | 70.00th=[ 275], 80.00th=[ 288], 90.00th=[ 313], 95.00th=[ 342], 00:59:57.316 | 99.00th=[ 405], 99.50th=[ 405], 99.90th=[ 414], 99.95th=[ 414], 00:59:57.316 | 99.99th=[ 414] 00:59:57.316 bw ( KiB/s): min= 128, max= 256, per=3.60%, avg=236.80, stdev=46.89, samples=20 00:59:57.316 iops : min= 32, max= 64, avg=59.20, stdev=11.72, samples=20 00:59:57.316 lat (msec) : 250=33.55%, 500=66.45% 00:59:57.316 cpu : usr=97.97%, sys=1.49%, ctx=45, majf=0, minf=9 00:59:57.316 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:59:57.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.316 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.316 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:57.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:59:57.316 filename1: (groupid=0, jobs=1): err= 0: pid=829042: Mon Dec 9 05:54:50 2024 00:59:57.316 read: IOPS=60, BW=240KiB/s (246kB/s)(2432KiB/10118msec) 00:59:57.316 slat (nsec): min=8339, max=55647, avg=23964.42, stdev=6137.90 00:59:57.316 clat (msec): min=163, max=405, avg=264.11, stdev=43.72 00:59:57.316 lat (msec): min=163, max=405, avg=264.13, stdev=43.72 00:59:57.316 clat percentiles (msec): 00:59:57.316 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 224], 20.00th=[ 241], 00:59:57.316 | 30.00th=[ 245], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 268], 00:59:57.316 | 70.00th=[ 275], 80.00th=[ 288], 90.00th=[ 317], 95.00th=[ 342], 00:59:57.316 | 99.00th=[ 405], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:59:57.316 | 99.99th=[ 405] 00:59:57.316 bw ( KiB/s): min= 128, max= 256, per=3.60%, avg=236.80, stdev=44.84, samples=20 00:59:57.316 iops : min= 32, max= 64, avg=59.20, stdev=11.21, samples=20 00:59:57.316 lat (msec) : 250=35.86%, 500=64.14% 00:59:57.316 cpu : usr=98.18%, sys=1.39%, ctx=14, majf=0, minf=9 00:59:57.316 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:59:57.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.316 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.316 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:57.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:59:57.316 filename1: (groupid=0, jobs=1): err= 0: pid=829043: Mon Dec 9 05:54:50 2024 00:59:57.316 read: IOPS=61, BW=246KiB/s (252kB/s)(2496KiB/10139msec) 00:59:57.316 slat (nsec): min=8172, max=60660, avg=25159.77, stdev=8019.59 00:59:57.316 clat (msec): min=159, max=405, avg=259.77, stdev=36.94 00:59:57.316 lat (msec): min=159, max=405, avg=259.80, stdev=36.94 00:59:57.316 clat percentiles (msec): 
00:59:57.316 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 224], 20.00th=[ 245], 00:59:57.316 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 259], 60.00th=[ 264], 00:59:57.316 | 70.00th=[ 271], 80.00th=[ 275], 90.00th=[ 317], 95.00th=[ 321], 00:59:57.316 | 99.00th=[ 355], 99.50th=[ 393], 99.90th=[ 405], 99.95th=[ 405], 00:59:57.316 | 99.99th=[ 405] 00:59:57.317 bw ( KiB/s): min= 127, max= 384, per=3.71%, avg=243.15, stdev=57.35, samples=20 00:59:57.317 iops : min= 31, max= 96, avg=60.75, stdev=14.42, samples=20 00:59:57.317 lat (msec) : 250=35.26%, 500=64.74% 00:59:57.317 cpu : usr=97.98%, sys=1.67%, ctx=24, majf=0, minf=9 00:59:57.317 IO depths : 1=3.5%, 2=9.8%, 4=25.0%, 8=52.7%, 16=9.0%, 32=0.0%, >=64=0.0% 00:59:57.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.317 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.317 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:57.317 latency : target=0, window=0, percentile=100.00%, depth=16 00:59:57.317 filename1: (groupid=0, jobs=1): err= 0: pid=829044: Mon Dec 9 05:54:50 2024 00:59:57.317 read: IOPS=60, BW=240KiB/s (246kB/s)(2432KiB/10124msec) 00:59:57.317 slat (nsec): min=14081, max=88764, avg=27351.20, stdev=9735.85 00:59:57.317 clat (msec): min=164, max=505, avg=266.18, stdev=50.95 00:59:57.317 lat (msec): min=164, max=505, avg=266.21, stdev=50.95 00:59:57.317 clat percentiles (msec): 00:59:57.317 | 1.00th=[ 176], 5.00th=[ 194], 10.00th=[ 226], 20.00th=[ 245], 00:59:57.317 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 259], 60.00th=[ 268], 00:59:57.317 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 317], 95.00th=[ 321], 00:59:57.317 | 99.00th=[ 506], 99.50th=[ 506], 99.90th=[ 506], 99.95th=[ 506], 00:59:57.317 | 99.99th=[ 506] 00:59:57.317 bw ( KiB/s): min= 128, max= 384, per=3.80%, avg=249.26, stdev=50.12, samples=19 00:59:57.317 iops : min= 32, max= 96, avg=62.32, stdev=12.53, samples=19 00:59:57.317 lat (msec) : 250=34.21%, 500=63.16%, 750=2.63% 00:59:57.317 cpu : usr=98.11%, sys=1.33%, ctx=16, majf=0, minf=9 00:59:57.317 IO depths : 1=4.6%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.9%, 32=0.0%, >=64=0.0% 00:59:57.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.317 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.317 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:57.317 latency : target=0, window=0, percentile=100.00%, depth=16 00:59:57.317 filename1: (groupid=0, jobs=1): err= 0: pid=829045: Mon Dec 9 05:54:50 2024 00:59:57.317 read: IOPS=75, BW=301KiB/s (309kB/s)(3056KiB/10143msec) 00:59:57.317 slat (usec): min=8, max=100, avg=63.19, stdev=14.11 00:59:57.317 clat (msec): min=86, max=346, avg=211.07, stdev=47.90 00:59:57.317 lat (msec): min=86, max=346, avg=211.13, stdev=47.90 00:59:57.317 clat percentiles (msec): 00:59:57.317 | 1.00th=[ 87], 5.00th=[ 142], 10.00th=[ 155], 20.00th=[ 174], 00:59:57.317 | 30.00th=[ 178], 40.00th=[ 197], 50.00th=[ 215], 60.00th=[ 234], 00:59:57.317 | 70.00th=[ 245], 80.00th=[ 255], 90.00th=[ 266], 95.00th=[ 275], 00:59:57.317 | 99.00th=[ 296], 99.50th=[ 326], 99.90th=[ 347], 99.95th=[ 347], 00:59:57.317 | 99.99th=[ 347] 00:59:57.317 bw ( KiB/s): min= 256, max= 384, per=4.56%, avg=299.20, stdev=58.75, samples=20 00:59:57.317 iops : min= 64, max= 96, avg=74.80, stdev=14.69, samples=20 00:59:57.317 lat (msec) : 100=3.80%, 250=69.76%, 500=26.44% 00:59:57.317 cpu : usr=98.14%, sys=1.41%, ctx=27, majf=0, minf=9 00:59:57.317 IO depths : 1=1.3%, 2=6.0%, 4=20.3%, 
8=61.1%, 16=11.3%, 32=0.0%, >=64=0.0% 00:59:57.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.317 complete : 0=0.0%, 4=92.8%, 8=1.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.317 issued rwts: total=764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:57.317 latency : target=0, window=0, percentile=100.00%, depth=16 00:59:57.317 filename1: (groupid=0, jobs=1): err= 0: pid=829046: Mon Dec 9 05:54:50 2024 00:59:57.317 read: IOPS=90, BW=362KiB/s (371kB/s)(3672KiB/10143msec) 00:59:57.317 slat (nsec): min=8150, max=40876, avg=10703.18, stdev=4000.80 00:59:57.317 clat (msec): min=92, max=323, avg=175.34, stdev=31.98 00:59:57.317 lat (msec): min=92, max=323, avg=175.35, stdev=31.98 00:59:57.317 clat percentiles (msec): 00:59:57.317 | 1.00th=[ 101], 5.00th=[ 128], 10.00th=[ 140], 20.00th=[ 157], 00:59:57.317 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 176], 00:59:57.317 | 70.00th=[ 182], 80.00th=[ 199], 90.00th=[ 213], 95.00th=[ 230], 00:59:57.317 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 326], 99.95th=[ 326], 00:59:57.317 | 99.99th=[ 326] 00:59:57.317 bw ( KiB/s): min= 224, max= 512, per=5.49%, avg=360.80, stdev=55.54, samples=20 00:59:57.317 iops : min= 56, max= 128, avg=90.20, stdev=13.89, samples=20 00:59:57.317 lat (msec) : 100=0.22%, 250=96.95%, 500=2.83% 00:59:57.317 cpu : usr=98.06%, sys=1.52%, ctx=24, majf=0, minf=9 00:59:57.317 IO depths : 1=0.8%, 2=2.4%, 4=10.8%, 8=74.1%, 16=12.0%, 32=0.0%, >=64=0.0% 00:59:57.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.317 complete : 0=0.0%, 4=90.0%, 8=4.8%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.317 issued rwts: total=918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:57.317 latency : target=0, window=0, percentile=100.00%, depth=16 00:59:57.317 filename1: (groupid=0, jobs=1): err= 0: pid=829047: Mon Dec 9 05:54:50 2024 00:59:57.317 read: IOPS=60, BW=240KiB/s (246kB/s)(2432KiB/10128msec) 00:59:57.317 slat (usec): min=9, max=105, avg=60.15, stdev=20.24 00:59:57.317 clat (msec): min=141, max=439, avg=264.10, stdev=45.57 00:59:57.317 lat (msec): min=141, max=439, avg=264.16, stdev=45.56 00:59:57.317 clat percentiles (msec): 00:59:57.317 | 1.00th=[ 144], 5.00th=[ 180], 10.00th=[ 226], 20.00th=[ 239], 00:59:57.317 | 30.00th=[ 245], 40.00th=[ 255], 50.00th=[ 259], 60.00th=[ 268], 00:59:57.317 | 70.00th=[ 275], 80.00th=[ 300], 90.00th=[ 330], 95.00th=[ 338], 00:59:57.317 | 99.00th=[ 393], 99.50th=[ 426], 99.90th=[ 439], 99.95th=[ 439], 00:59:57.317 | 99.99th=[ 439] 00:59:57.317 bw ( KiB/s): min= 128, max= 368, per=3.60%, avg=236.80, stdev=57.71, samples=20 00:59:57.317 iops : min= 32, max= 92, avg=59.20, stdev=14.43, samples=20 00:59:57.317 lat (msec) : 250=33.55%, 500=66.45% 00:59:57.317 cpu : usr=98.22%, sys=1.30%, ctx=28, majf=0, minf=9 00:59:57.317 IO depths : 1=3.5%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.0%, 32=0.0%, >=64=0.0% 00:59:57.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.317 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.317 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:57.317 latency : target=0, window=0, percentile=100.00%, depth=16 00:59:57.317 filename1: (groupid=0, jobs=1): err= 0: pid=829048: Mon Dec 9 05:54:50 2024 00:59:57.317 read: IOPS=60, BW=240KiB/s (246kB/s)(2432KiB/10117msec) 00:59:57.317 slat (usec): min=12, max=109, avg=62.69, stdev=14.99 00:59:57.317 clat (msec): min=223, max=404, avg=265.65, stdev=31.91 00:59:57.317 lat (msec): min=223, max=404, 
avg=265.71, stdev=31.91 00:59:57.317 clat percentiles (msec): 00:59:57.317 | 1.00th=[ 224], 5.00th=[ 226], 10.00th=[ 234], 20.00th=[ 243], 00:59:57.317 | 30.00th=[ 247], 40.00th=[ 257], 50.00th=[ 262], 60.00th=[ 268], 00:59:57.317 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 305], 95.00th=[ 317], 00:59:57.317 | 99.00th=[ 405], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:59:57.317 | 99.99th=[ 405] 00:59:57.317 bw ( KiB/s): min= 128, max= 256, per=3.60%, avg=236.80, stdev=46.89, samples=20 00:59:57.317 iops : min= 32, max= 64, avg=59.20, stdev=11.72, samples=20 00:59:57.317 lat (msec) : 250=31.58%, 500=68.42% 00:59:57.317 cpu : usr=97.86%, sys=1.51%, ctx=110, majf=0, minf=9 00:59:57.317 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:59:57.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.317 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.317 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:57.317 latency : target=0, window=0, percentile=100.00%, depth=16 00:59:57.317 filename1: (groupid=0, jobs=1): err= 0: pid=829049: Mon Dec 9 05:54:50 2024 00:59:57.317 read: IOPS=66, BW=265KiB/s (271kB/s)(2688KiB/10148msec) 00:59:57.317 slat (usec): min=8, max=119, avg=59.76, stdev=21.65 00:59:57.317 clat (msec): min=99, max=411, avg=241.02, stdev=56.22 00:59:57.317 lat (msec): min=99, max=411, avg=241.08, stdev=56.23 00:59:57.317 clat percentiles (msec): 00:59:57.317 | 1.00th=[ 101], 5.00th=[ 142], 10.00th=[ 165], 20.00th=[ 174], 00:59:57.317 | 30.00th=[ 232], 40.00th=[ 245], 50.00th=[ 255], 60.00th=[ 264], 00:59:57.317 | 70.00th=[ 271], 80.00th=[ 275], 90.00th=[ 305], 95.00th=[ 317], 00:59:57.317 | 99.00th=[ 347], 99.50th=[ 384], 99.90th=[ 414], 99.95th=[ 414], 00:59:57.317 | 99.99th=[ 414] 00:59:57.317 bw ( KiB/s): min= 128, max= 384, per=4.00%, avg=262.40, stdev=63.87, samples=20 00:59:57.317 iops : min= 32, max= 96, avg=65.60, stdev=15.97, samples=20 00:59:57.317 lat (msec) : 100=1.49%, 250=45.83%, 500=52.68% 00:59:57.317 cpu : usr=97.66%, sys=1.73%, ctx=56, majf=0, minf=9 00:59:57.317 IO depths : 1=3.0%, 2=9.2%, 4=25.0%, 8=53.3%, 16=9.5%, 32=0.0%, >=64=0.0% 00:59:57.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.317 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.317 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:57.317 latency : target=0, window=0, percentile=100.00%, depth=16 00:59:57.317 filename2: (groupid=0, jobs=1): err= 0: pid=829050: Mon Dec 9 05:54:50 2024 00:59:57.317 read: IOPS=60, BW=242KiB/s (248kB/s)(2432KiB/10047msec) 00:59:57.317 slat (usec): min=8, max=102, avg=66.36, stdev=14.82 00:59:57.317 clat (msec): min=145, max=438, avg=263.82, stdev=33.96 00:59:57.317 lat (msec): min=145, max=438, avg=263.88, stdev=33.96 00:59:57.317 clat percentiles (msec): 00:59:57.317 | 1.00th=[ 180], 5.00th=[ 224], 10.00th=[ 228], 20.00th=[ 241], 00:59:57.317 | 30.00th=[ 245], 40.00th=[ 255], 50.00th=[ 259], 60.00th=[ 266], 00:59:57.317 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 305], 95.00th=[ 330], 00:59:57.317 | 99.00th=[ 355], 99.50th=[ 393], 99.90th=[ 439], 99.95th=[ 439], 00:59:57.317 | 99.99th=[ 439] 00:59:57.317 bw ( KiB/s): min= 128, max= 384, per=3.60%, avg=236.80, stdev=59.78, samples=20 00:59:57.317 iops : min= 32, max= 96, avg=59.20, stdev=14.94, samples=20 00:59:57.317 lat (msec) : 250=32.57%, 500=67.43% 00:59:57.317 cpu : usr=97.95%, sys=1.35%, ctx=95, majf=0, minf=9 00:59:57.317 
IO depths : 1=4.8%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.7%, 32=0.0%, >=64=0.0% 00:59:57.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.317 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.317 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:57.317 latency : target=0, window=0, percentile=100.00%, depth=16 00:59:57.317 filename2: (groupid=0, jobs=1): err= 0: pid=829051: Mon Dec 9 05:54:50 2024 00:59:57.317 read: IOPS=61, BW=246KiB/s (252kB/s)(2496KiB/10145msec) 00:59:57.318 slat (nsec): min=8744, max=68068, avg=30549.30, stdev=9195.87 00:59:57.318 clat (msec): min=172, max=405, avg=259.82, stdev=32.73 00:59:57.318 lat (msec): min=172, max=406, avg=259.85, stdev=32.73 00:59:57.318 clat percentiles (msec): 00:59:57.318 | 1.00th=[ 176], 5.00th=[ 194], 10.00th=[ 226], 20.00th=[ 245], 00:59:57.318 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 259], 60.00th=[ 266], 00:59:57.318 | 70.00th=[ 271], 80.00th=[ 275], 90.00th=[ 300], 95.00th=[ 317], 00:59:57.318 | 99.00th=[ 330], 99.50th=[ 347], 99.90th=[ 405], 99.95th=[ 405], 00:59:57.318 | 99.99th=[ 405] 00:59:57.318 bw ( KiB/s): min= 128, max= 384, per=3.71%, avg=243.20, stdev=57.24, samples=20 00:59:57.318 iops : min= 32, max= 96, avg=60.80, stdev=14.31, samples=20 00:59:57.318 lat (msec) : 250=33.97%, 500=66.03% 00:59:57.318 cpu : usr=97.57%, sys=1.71%, ctx=84, majf=0, minf=9 00:59:57.318 IO depths : 1=4.8%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.7%, 32=0.0%, >=64=0.0% 00:59:57.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.318 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.318 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:57.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:59:57.318 filename2: (groupid=0, jobs=1): err= 0: pid=829052: Mon Dec 9 05:54:50 2024 00:59:57.318 read: IOPS=90, BW=361KiB/s (370kB/s)(3664KiB/10142msec) 00:59:57.318 slat (nsec): min=8171, max=82522, avg=15287.28, stdev=12002.16 00:59:57.318 clat (msec): min=100, max=325, avg=176.50, stdev=30.17 00:59:57.318 lat (msec): min=100, max=325, avg=176.51, stdev=30.16 00:59:57.318 clat percentiles (msec): 00:59:57.318 | 1.00th=[ 101], 5.00th=[ 142], 10.00th=[ 142], 20.00th=[ 157], 00:59:57.318 | 30.00th=[ 165], 40.00th=[ 171], 50.00th=[ 174], 60.00th=[ 176], 00:59:57.318 | 70.00th=[ 180], 80.00th=[ 197], 90.00th=[ 215], 95.00th=[ 228], 00:59:57.318 | 99.00th=[ 279], 99.50th=[ 300], 99.90th=[ 326], 99.95th=[ 326], 00:59:57.318 | 99.99th=[ 326] 00:59:57.318 bw ( KiB/s): min= 256, max= 432, per=5.49%, avg=360.00, stdev=45.40, samples=20 00:59:57.318 iops : min= 64, max= 108, avg=90.00, stdev=11.35, samples=20 00:59:57.318 lat (msec) : 250=97.82%, 500=2.18% 00:59:57.318 cpu : usr=97.86%, sys=1.50%, ctx=84, majf=0, minf=9 00:59:57.318 IO depths : 1=0.4%, 2=2.7%, 4=12.9%, 8=71.7%, 16=12.2%, 32=0.0%, >=64=0.0% 00:59:57.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.318 complete : 0=0.0%, 4=90.7%, 8=4.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.318 issued rwts: total=916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:57.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:59:57.318 filename2: (groupid=0, jobs=1): err= 0: pid=829053: Mon Dec 9 05:54:50 2024 00:59:57.318 read: IOPS=61, BW=246KiB/s (252kB/s)(2488KiB/10125msec) 00:59:57.318 slat (usec): min=8, max=102, avg=44.95, stdev=23.02 00:59:57.318 clat (msec): min=141, max=405, avg=259.77, stdev=42.78 
00:59:57.318 lat (msec): min=141, max=405, avg=259.82, stdev=42.78 00:59:57.318 clat percentiles (msec): 00:59:57.318 | 1.00th=[ 146], 5.00th=[ 176], 10.00th=[ 218], 20.00th=[ 234], 00:59:57.318 | 30.00th=[ 245], 40.00th=[ 255], 50.00th=[ 257], 60.00th=[ 266], 00:59:57.318 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 317], 95.00th=[ 330], 00:59:57.318 | 99.00th=[ 355], 99.50th=[ 393], 99.90th=[ 405], 99.95th=[ 405], 00:59:57.318 | 99.99th=[ 405] 00:59:57.318 bw ( KiB/s): min= 128, max= 384, per=3.69%, avg=242.40, stdev=55.49, samples=20 00:59:57.318 iops : min= 32, max= 96, avg=60.60, stdev=13.87, samples=20 00:59:57.318 lat (msec) : 250=37.30%, 500=62.70% 00:59:57.318 cpu : usr=98.02%, sys=1.37%, ctx=107, majf=0, minf=10 00:59:57.318 IO depths : 1=3.4%, 2=9.6%, 4=25.1%, 8=52.9%, 16=9.0%, 32=0.0%, >=64=0.0% 00:59:57.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.318 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.318 issued rwts: total=622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:57.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:59:57.318 filename2: (groupid=0, jobs=1): err= 0: pid=829054: Mon Dec 9 05:54:50 2024 00:59:57.318 read: IOPS=64, BW=259KiB/s (265kB/s)(2624KiB/10143msec) 00:59:57.318 slat (usec): min=8, max=102, avg=60.39, stdev=16.33 00:59:57.318 clat (msec): min=100, max=417, avg=246.91, stdev=52.62 00:59:57.318 lat (msec): min=100, max=417, avg=246.97, stdev=52.63 00:59:57.318 clat percentiles (msec): 00:59:57.318 | 1.00th=[ 102], 5.00th=[ 142], 10.00th=[ 169], 20.00th=[ 226], 00:59:57.318 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 257], 60.00th=[ 264], 00:59:57.318 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 305], 95.00th=[ 317], 00:59:57.318 | 99.00th=[ 351], 99.50th=[ 401], 99.90th=[ 418], 99.95th=[ 418], 00:59:57.318 | 99.99th=[ 418] 00:59:57.318 bw ( KiB/s): min= 128, max= 384, per=3.89%, avg=256.00, stdev=57.10, samples=20 00:59:57.318 iops : min= 32, max= 96, avg=64.00, stdev=14.28, samples=20 00:59:57.318 lat (msec) : 250=43.90%, 500=56.10% 00:59:57.318 cpu : usr=97.94%, sys=1.47%, ctx=73, majf=0, minf=9 00:59:57.318 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:59:57.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.318 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.318 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:57.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:59:57.318 filename2: (groupid=0, jobs=1): err= 0: pid=829055: Mon Dec 9 05:54:50 2024 00:59:57.318 read: IOPS=60, BW=240KiB/s (246kB/s)(2432KiB/10125msec) 00:59:57.318 slat (usec): min=14, max=102, avg=65.65, stdev=12.18 00:59:57.318 clat (msec): min=172, max=504, avg=265.80, stdev=49.04 00:59:57.318 lat (msec): min=172, max=504, avg=265.86, stdev=49.03 00:59:57.318 clat percentiles (msec): 00:59:57.318 | 1.00th=[ 174], 5.00th=[ 215], 10.00th=[ 228], 20.00th=[ 245], 00:59:57.318 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 259], 60.00th=[ 268], 00:59:57.318 | 70.00th=[ 271], 80.00th=[ 275], 90.00th=[ 317], 95.00th=[ 317], 00:59:57.318 | 99.00th=[ 506], 99.50th=[ 506], 99.90th=[ 506], 99.95th=[ 506], 00:59:57.318 | 99.99th=[ 506] 00:59:57.318 bw ( KiB/s): min= 128, max= 368, per=3.80%, avg=249.26, stdev=49.84, samples=19 00:59:57.318 iops : min= 32, max= 92, avg=62.32, stdev=12.46, samples=19 00:59:57.318 lat (msec) : 250=34.21%, 500=63.16%, 750=2.63% 00:59:57.318 cpu : usr=97.78%, 
sys=1.60%, ctx=55, majf=0, minf=9 00:59:57.318 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:59:57.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.318 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.318 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:57.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:59:57.318 filename2: (groupid=0, jobs=1): err= 0: pid=829056: Mon Dec 9 05:54:50 2024 00:59:57.318 read: IOPS=59, BW=239KiB/s (245kB/s)(2424KiB/10125msec) 00:59:57.318 slat (usec): min=5, max=100, avg=46.48, stdev=22.38 00:59:57.318 clat (msec): min=172, max=546, avg=266.63, stdev=54.83 00:59:57.318 lat (msec): min=172, max=546, avg=266.68, stdev=54.82 00:59:57.318 clat percentiles (msec): 00:59:57.318 | 1.00th=[ 176], 5.00th=[ 192], 10.00th=[ 224], 20.00th=[ 243], 00:59:57.318 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 268], 00:59:57.318 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 317], 95.00th=[ 347], 00:59:57.318 | 99.00th=[ 506], 99.50th=[ 506], 99.90th=[ 550], 99.95th=[ 550], 00:59:57.318 | 99.99th=[ 550] 00:59:57.318 bw ( KiB/s): min= 128, max= 368, per=3.78%, avg=248.42, stdev=47.81, samples=19 00:59:57.318 iops : min= 32, max= 92, avg=62.11, stdev=11.95, samples=19 00:59:57.318 lat (msec) : 250=36.63%, 500=60.73%, 750=2.64% 00:59:57.318 cpu : usr=98.03%, sys=1.44%, ctx=74, majf=0, minf=9 00:59:57.318 IO depths : 1=3.5%, 2=9.7%, 4=25.1%, 8=52.8%, 16=8.9%, 32=0.0%, >=64=0.0% 00:59:57.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.318 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.318 issued rwts: total=606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:57.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:59:57.318 filename2: (groupid=0, jobs=1): err= 0: pid=829057: Mon Dec 9 05:54:50 2024 00:59:57.318 read: IOPS=90, BW=361KiB/s (370kB/s)(3664KiB/10142msec) 00:59:57.318 slat (nsec): min=6121, max=96190, avg=57479.61, stdev=14211.78 00:59:57.318 clat (msec): min=99, max=315, avg=175.57, stdev=29.33 00:59:57.318 lat (msec): min=99, max=315, avg=175.63, stdev=29.33 00:59:57.318 clat percentiles (msec): 00:59:57.318 | 1.00th=[ 100], 5.00th=[ 142], 10.00th=[ 153], 20.00th=[ 157], 00:59:57.318 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:59:57.318 | 70.00th=[ 182], 80.00th=[ 194], 90.00th=[ 203], 95.00th=[ 228], 00:59:57.318 | 99.00th=[ 284], 99.50th=[ 305], 99.90th=[ 317], 99.95th=[ 317], 00:59:57.318 | 99.99th=[ 317] 00:59:57.318 bw ( KiB/s): min= 256, max= 432, per=5.49%, avg=360.00, stdev=43.59, samples=20 00:59:57.318 iops : min= 64, max= 108, avg=90.00, stdev=10.90, samples=20 00:59:57.318 lat (msec) : 100=1.75%, 250=96.29%, 500=1.97% 00:59:57.318 cpu : usr=98.20%, sys=1.32%, ctx=16, majf=0, minf=9 00:59:57.318 IO depths : 1=0.3%, 2=1.0%, 4=8.0%, 8=78.4%, 16=12.3%, 32=0.0%, >=64=0.0% 00:59:57.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.318 complete : 0=0.0%, 4=89.2%, 8=5.4%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:59:57.318 issued rwts: total=916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:59:57.318 latency : target=0, window=0, percentile=100.00%, depth=16 00:59:57.318 00:59:57.318 Run status group 0 (all jobs): 00:59:57.318 READ: bw=6553KiB/s (6711kB/s), 239KiB/s-366KiB/s (245kB/s-375kB/s), io=64.9MiB (68.1MB), run=10038-10148msec 00:59:57.318 05:54:50 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:59:57.318 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:59:57.318 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:59:57.318 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:59:57.318 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:59:57.318 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:59:57.318 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:57.319 05:54:50 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:57.319 bdev_null0 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:57.319 [2024-12-09 05:54:50.602833] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:59:57.319 bdev_null1 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:59:57.319 { 00:59:57.319 "params": { 00:59:57.319 "name": "Nvme$subsystem", 00:59:57.319 "trtype": "$TEST_TRANSPORT", 00:59:57.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:59:57.319 "adrfam": "ipv4", 00:59:57.319 "trsvcid": "$NVMF_PORT", 00:59:57.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:59:57.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:59:57.319 "hdgst": ${hdgst:-false}, 00:59:57.319 "ddgst": ${ddgst:-false} 00:59:57.319 }, 00:59:57.319 "method": "bdev_nvme_attach_controller" 00:59:57.319 } 00:59:57.319 EOF 00:59:57.319 )") 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:59:57.319 { 00:59:57.319 "params": { 00:59:57.319 "name": "Nvme$subsystem", 00:59:57.319 "trtype": "$TEST_TRANSPORT", 00:59:57.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:59:57.319 "adrfam": "ipv4", 00:59:57.319 "trsvcid": "$NVMF_PORT", 00:59:57.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:59:57.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:59:57.319 "hdgst": ${hdgst:-false}, 00:59:57.319 "ddgst": ${ddgst:-false} 00:59:57.319 }, 00:59:57.319 "method": "bdev_nvme_attach_controller" 00:59:57.319 } 00:59:57.319 EOF 00:59:57.319 )") 00:59:57.319 05:54:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:59:57.320 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:59:57.320 05:54:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:59:57.320 05:54:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
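The ldd/grep/awk steps around this point are the fio wrapper deciding whether a sanitizer runtime has to be preloaded alongside the SPDK bdev plugin; on this run both lookups come back empty, so LD_PRELOAD ends up holding only the plugin itself. A minimal sketch of that logic, using the paths from the trace:

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
for lib in libasan libclang_rt.asan; do
    # resolved path of the sanitizer runtime, if the plugin links against it
    asan_lib=$(ldd "$plugin" | grep "$lib" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && LD_PRELOAD="$asan_lib" && break
done
LD_PRELOAD="$LD_PRELOAD $plugin"   # here: ' .../build/fio/spdk_bdev'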
00:59:57.320 05:54:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:59:57.320 05:54:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:59:57.320 "params": { 00:59:57.320 "name": "Nvme0", 00:59:57.320 "trtype": "tcp", 00:59:57.320 "traddr": "10.0.0.2", 00:59:57.320 "adrfam": "ipv4", 00:59:57.320 "trsvcid": "4420", 00:59:57.320 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:59:57.320 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:59:57.320 "hdgst": false, 00:59:57.320 "ddgst": false 00:59:57.320 }, 00:59:57.320 "method": "bdev_nvme_attach_controller" 00:59:57.320 },{ 00:59:57.320 "params": { 00:59:57.320 "name": "Nvme1", 00:59:57.320 "trtype": "tcp", 00:59:57.320 "traddr": "10.0.0.2", 00:59:57.320 "adrfam": "ipv4", 00:59:57.320 "trsvcid": "4420", 00:59:57.320 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:59:57.320 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:59:57.320 "hdgst": false, 00:59:57.320 "ddgst": false 00:59:57.320 }, 00:59:57.320 "method": "bdev_nvme_attach_controller" 00:59:57.320 }' 00:59:57.320 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:59:57.320 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:59:57.320 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:59:57.320 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:59:57.320 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:59:57.320 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:59:57.320 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:59:57.320 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:59:57.320 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:59:57.320 05:54:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:59:57.320 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:59:57.320 ... 00:59:57.320 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:59:57.320 ... 
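Two things are worth unpacking before the run output starts. First, the JSON printed just above is not an RPC sent to the target: it is the bdev configuration handed to the SPDK application embedded in fio via --spdk_json_conf, telling it to attach two NVMe-oF controllers over TCP to 10.0.0.2:4420 (cnode0 and cnode1, digests off). Second, the /dev/fd/62 and /dev/fd/61 arguments are just file-descriptor paths through which the wrapper feeds that JSON and the generated job file. A hand-run equivalent, as a rough sketch with ordinary files in place of the descriptors (bdev.json and dif.fio are illustrative names, and gen_nvmf_target_json may wrap these entries in additional bdev config depending on the SPDK version):

cat > bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0",
                      "hdgst": false, "ddgst": false }
        },
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1",
                      "hdgst": false, "ddgst": false }
        }
      ]
    }
  ]
}
JSON

LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio

dif.fio here stands for the job file gen_fio_conf writes from the parameters set at target/dif.sh@115: two randread jobs (filename0, filename1) with bs=8k,16k,128k, numjobs=2, iodepth=8 and runtime=5, each pointed at one of the attached bdevs -- which is exactly the job list echoed above.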
00:59:57.320 fio-3.35 00:59:57.320 Starting 4 threads 01:00:02.595 01:00:02.595 filename0: (groupid=0, jobs=1): err= 0: pid=830552: Mon Dec 9 05:54:56 2024 01:00:02.595 read: IOPS=1786, BW=14.0MiB/s (14.6MB/s)(69.9MiB/5004msec) 01:00:02.595 slat (nsec): min=6955, max=81635, avg=15911.54, stdev=8779.12 01:00:02.595 clat (usec): min=981, max=7969, avg=4420.09, stdev=711.30 01:00:02.595 lat (usec): min=995, max=7985, avg=4436.00, stdev=711.07 01:00:02.595 clat percentiles (usec): 01:00:02.595 | 1.00th=[ 2278], 5.00th=[ 3523], 10.00th=[ 3818], 20.00th=[ 4080], 01:00:02.595 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 01:00:02.595 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 5080], 95.00th=[ 5866], 01:00:02.595 | 99.00th=[ 7111], 99.50th=[ 7439], 99.90th=[ 7701], 99.95th=[ 7767], 01:00:02.595 | 99.99th=[ 7963] 01:00:02.595 bw ( KiB/s): min=13856, max=14784, per=24.79%, avg=14294.40, stdev=252.67, samples=10 01:00:02.595 iops : min= 1732, max= 1848, avg=1786.80, stdev=31.58, samples=10 01:00:02.595 lat (usec) : 1000=0.04% 01:00:02.595 lat (msec) : 2=0.63%, 4=15.72%, 10=83.61% 01:00:02.595 cpu : usr=94.30%, sys=5.20%, ctx=8, majf=0, minf=0 01:00:02.595 IO depths : 1=0.5%, 2=12.6%, 4=60.1%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 01:00:02.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:02.595 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:02.595 issued rwts: total=8942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:02.595 latency : target=0, window=0, percentile=100.00%, depth=8 01:00:02.595 filename0: (groupid=0, jobs=1): err= 0: pid=830553: Mon Dec 9 05:54:56 2024 01:00:02.595 read: IOPS=1849, BW=14.4MiB/s (15.1MB/s)(72.3MiB/5002msec) 01:00:02.595 slat (nsec): min=7099, max=94780, avg=16593.32, stdev=8193.20 01:00:02.595 clat (usec): min=908, max=8387, avg=4265.02, stdev=585.73 01:00:02.595 lat (usec): min=921, max=8400, avg=4281.61, stdev=586.71 01:00:02.595 clat percentiles (usec): 01:00:02.595 | 1.00th=[ 2409], 5.00th=[ 3326], 10.00th=[ 3589], 20.00th=[ 3916], 01:00:02.595 | 30.00th=[ 4146], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4359], 01:00:02.595 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 4752], 95.00th=[ 5080], 01:00:02.595 | 99.00th=[ 6128], 99.50th=[ 6718], 99.90th=[ 7504], 99.95th=[ 7767], 01:00:02.595 | 99.99th=[ 8356] 01:00:02.595 bw ( KiB/s): min=14144, max=15230, per=25.65%, avg=14790.20, stdev=376.58, samples=10 01:00:02.595 iops : min= 1768, max= 1903, avg=1848.70, stdev=46.98, samples=10 01:00:02.595 lat (usec) : 1000=0.01% 01:00:02.595 lat (msec) : 2=0.32%, 4=23.64%, 10=76.02% 01:00:02.595 cpu : usr=89.82%, sys=7.62%, ctx=151, majf=0, minf=0 01:00:02.595 IO depths : 1=0.7%, 2=17.4%, 4=56.1%, 8=25.7%, 16=0.0%, 32=0.0%, >=64=0.0% 01:00:02.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:02.595 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:02.595 issued rwts: total=9250,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:02.595 latency : target=0, window=0, percentile=100.00%, depth=8 01:00:02.595 filename1: (groupid=0, jobs=1): err= 0: pid=830554: Mon Dec 9 05:54:56 2024 01:00:02.595 read: IOPS=1851, BW=14.5MiB/s (15.2MB/s)(72.3MiB/5002msec) 01:00:02.595 slat (nsec): min=6819, max=65823, avg=14510.49, stdev=7755.23 01:00:02.595 clat (usec): min=863, max=7937, avg=4270.60, stdev=614.16 01:00:02.595 lat (usec): min=886, max=7950, avg=4285.11, stdev=614.80 01:00:02.595 clat percentiles (usec): 01:00:02.595 | 1.00th=[ 2376], 5.00th=[ 
3359], 10.00th=[ 3621], 20.00th=[ 3916], 01:00:02.595 | 30.00th=[ 4113], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 01:00:02.595 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 5145], 01:00:02.596 | 99.00th=[ 6456], 99.50th=[ 7046], 99.90th=[ 7767], 99.95th=[ 7832], 01:00:02.596 | 99.99th=[ 7963] 01:00:02.596 bw ( KiB/s): min=14208, max=15280, per=25.53%, avg=14721.78, stdev=359.68, samples=9 01:00:02.596 iops : min= 1776, max= 1910, avg=1840.22, stdev=44.96, samples=9 01:00:02.596 lat (usec) : 1000=0.04% 01:00:02.596 lat (msec) : 2=0.45%, 4=23.86%, 10=75.65% 01:00:02.596 cpu : usr=94.14%, sys=5.36%, ctx=10, majf=0, minf=9 01:00:02.596 IO depths : 1=0.5%, 2=14.0%, 4=57.8%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 01:00:02.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:02.596 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:02.596 issued rwts: total=9260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:02.596 latency : target=0, window=0, percentile=100.00%, depth=8 01:00:02.596 filename1: (groupid=0, jobs=1): err= 0: pid=830555: Mon Dec 9 05:54:56 2024 01:00:02.596 read: IOPS=1721, BW=13.5MiB/s (14.1MB/s)(67.3MiB/5001msec) 01:00:02.596 slat (nsec): min=6512, max=81420, avg=15983.36, stdev=9054.69 01:00:02.596 clat (usec): min=846, max=8225, avg=4590.97, stdev=751.95 01:00:02.596 lat (usec): min=859, max=8234, avg=4606.96, stdev=750.96 01:00:02.596 clat percentiles (usec): 01:00:02.596 | 1.00th=[ 2835], 5.00th=[ 3752], 10.00th=[ 3982], 20.00th=[ 4228], 01:00:02.596 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4490], 01:00:02.596 | 70.00th=[ 4621], 80.00th=[ 4883], 90.00th=[ 5473], 95.00th=[ 6194], 01:00:02.596 | 99.00th=[ 7373], 99.50th=[ 7570], 99.90th=[ 7963], 99.95th=[ 7963], 01:00:02.596 | 99.99th=[ 8225] 01:00:02.596 bw ( KiB/s): min=13354, max=14208, per=23.94%, avg=13802.00, stdev=290.15, samples=9 01:00:02.596 iops : min= 1669, max= 1776, avg=1725.22, stdev=36.32, samples=9 01:00:02.596 lat (usec) : 1000=0.06% 01:00:02.596 lat (msec) : 2=0.41%, 4=9.58%, 10=89.95% 01:00:02.596 cpu : usr=94.66%, sys=4.86%, ctx=6, majf=0, minf=9 01:00:02.596 IO depths : 1=0.2%, 2=11.5%, 4=60.1%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 01:00:02.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:02.596 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:02.596 issued rwts: total=8611,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:02.596 latency : target=0, window=0, percentile=100.00%, depth=8 01:00:02.596 01:00:02.596 Run status group 0 (all jobs): 01:00:02.596 READ: bw=56.3MiB/s (59.0MB/s), 13.5MiB/s-14.5MiB/s (14.1MB/s-15.2MB/s), io=282MiB (295MB), run=5001-5004msec 01:00:03.163 05:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 01:00:03.163 05:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:00:03.163 05:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:00:03.163 05:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:00:03.163 05:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:00:03.163 05:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:00:03.163 05:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:03.163 05:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
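The trace entries around here are the standard teardown; with the xtrace prefixes stripped away, destroy_subsystems 0 1 comes down to four RPCs against the running target, roughly as below (rpc_cmd is a thin wrapper that points scripts/rpc.py at the test target's RPC socket):

scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0   # drop the NVMe-oF subsystem first
scripts/rpc.py bdev_null_delete bdev_null0                        # then its backing null bdev
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
scripts/rpc.py bdev_null_delete bdev_null1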
01:00:03.163 05:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:03.163 05:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:00:03.163 05:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:03.163 05:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:03.163 05:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:03.163 05:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:00:03.163 05:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 01:00:03.163 05:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 01:00:03.163 05:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:00:03.163 05:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:03.163 05:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:03.163 05:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:03.163 05:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:00:03.163 05:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:03.163 05:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:03.163 05:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:03.163 01:00:03.163 real 0m24.821s 01:00:03.163 user 4m36.086s 01:00:03.163 sys 0m6.793s 01:00:03.163 05:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:03.163 05:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:03.163 ************************************ 01:00:03.163 END TEST fio_dif_rand_params 01:00:03.163 ************************************ 01:00:03.163 05:54:57 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 01:00:03.163 05:54:57 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:00:03.163 05:54:57 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:03.163 05:54:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:00:03.163 ************************************ 01:00:03.163 START TEST fio_dif_digest 01:00:03.163 ************************************ 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 01:00:03.163 05:54:57 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:00:03.163 bdev_null0 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:00:03.163 [2024-12-09 05:54:57.201872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:00:03.163 { 01:00:03.163 "params": { 01:00:03.163 "name": "Nvme$subsystem", 01:00:03.163 "trtype": "$TEST_TRANSPORT", 01:00:03.163 "traddr": "$NVMF_FIRST_TARGET_IP", 01:00:03.163 "adrfam": "ipv4", 01:00:03.163 "trsvcid": "$NVMF_PORT", 01:00:03.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:00:03.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:00:03.163 "hdgst": ${hdgst:-false}, 01:00:03.163 "ddgst": 
${ddgst:-false} 01:00:03.163 }, 01:00:03.163 "method": "bdev_nvme_attach_controller" 01:00:03.163 } 01:00:03.163 EOF 01:00:03.163 )") 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:00:03.163 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 01:00:03.164 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 01:00:03.164 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:00:03.164 05:54:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 01:00:03.164 05:54:57 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 01:00:03.164 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:00:03.164 05:54:57 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 01:00:03.164 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 01:00:03.164 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:00:03.164 05:54:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
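For reference, the create_subsystems 0 call traced above reduces to four RPCs; the only functional differences from the fio_dif_rand_params setup are the null bdev's --dif-type 3 and, in the attach parameters that follow, hdgst/ddgst being switched on. A sketch of the equivalent manual calls (again via scripts/rpc.py, which rpc_cmd wraps):

scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420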
01:00:03.164 05:54:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 01:00:03.164 05:54:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:00:03.164 "params": { 01:00:03.164 "name": "Nvme0", 01:00:03.164 "trtype": "tcp", 01:00:03.164 "traddr": "10.0.0.2", 01:00:03.164 "adrfam": "ipv4", 01:00:03.164 "trsvcid": "4420", 01:00:03.164 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:00:03.164 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:00:03.164 "hdgst": true, 01:00:03.164 "ddgst": true 01:00:03.164 }, 01:00:03.164 "method": "bdev_nvme_attach_controller" 01:00:03.164 }' 01:00:03.164 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 01:00:03.164 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:00:03.164 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:00:03.164 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:00:03.164 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:00:03.164 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:00:03.164 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 01:00:03.164 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:00:03.164 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 01:00:03.164 05:54:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:00:03.422 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 01:00:03.422 ... 
01:00:03.422 fio-3.35 01:00:03.422 Starting 3 threads 01:00:15.715 01:00:15.715 filename0: (groupid=0, jobs=1): err= 0: pid=831315: Mon Dec 9 05:55:08 2024 01:00:15.715 read: IOPS=200, BW=25.0MiB/s (26.2MB/s)(251MiB/10047msec) 01:00:15.715 slat (nsec): min=4306, max=39355, avg=16733.41, stdev=3243.34 01:00:15.715 clat (usec): min=11601, max=54953, avg=14946.04, stdev=1537.69 01:00:15.715 lat (usec): min=11615, max=54967, avg=14962.77, stdev=1537.69 01:00:15.715 clat percentiles (usec): 01:00:15.715 | 1.00th=[12780], 5.00th=[13304], 10.00th=[13698], 20.00th=[14091], 01:00:15.715 | 30.00th=[14484], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 01:00:15.715 | 70.00th=[15401], 80.00th=[15664], 90.00th=[16188], 95.00th=[16712], 01:00:15.715 | 99.00th=[17695], 99.50th=[17957], 99.90th=[19530], 99.95th=[48497], 01:00:15.715 | 99.99th=[54789] 01:00:15.715 bw ( KiB/s): min=24832, max=26368, per=33.10%, avg=25702.40, stdev=393.10, samples=20 01:00:15.715 iops : min= 194, max= 206, avg=200.80, stdev= 3.07, samples=20 01:00:15.715 lat (msec) : 20=99.90%, 50=0.05%, 100=0.05% 01:00:15.715 cpu : usr=95.21%, sys=4.27%, ctx=23, majf=0, minf=194 01:00:15.715 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:00:15.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:15.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:15.715 issued rwts: total=2011,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:15.715 latency : target=0, window=0, percentile=100.00%, depth=3 01:00:15.715 filename0: (groupid=0, jobs=1): err= 0: pid=831316: Mon Dec 9 05:55:08 2024 01:00:15.715 read: IOPS=196, BW=24.6MiB/s (25.8MB/s)(247MiB/10046msec) 01:00:15.715 slat (nsec): min=4275, max=39282, avg=16104.85, stdev=3162.77 01:00:15.715 clat (usec): min=12108, max=52179, avg=15210.83, stdev=1511.24 01:00:15.715 lat (usec): min=12122, max=52198, avg=15226.93, stdev=1511.58 01:00:15.716 clat percentiles (usec): 01:00:15.716 | 1.00th=[12780], 5.00th=[13698], 10.00th=[13960], 20.00th=[14353], 01:00:15.716 | 30.00th=[14615], 40.00th=[14877], 50.00th=[15139], 60.00th=[15401], 01:00:15.716 | 70.00th=[15664], 80.00th=[15926], 90.00th=[16450], 95.00th=[16909], 01:00:15.716 | 99.00th=[18220], 99.50th=[18482], 99.90th=[47973], 99.95th=[52167], 01:00:15.716 | 99.99th=[52167] 01:00:15.716 bw ( KiB/s): min=24320, max=26368, per=32.54%, avg=25267.20, stdev=664.98, samples=20 01:00:15.716 iops : min= 190, max= 206, avg=197.40, stdev= 5.20, samples=20 01:00:15.716 lat (msec) : 20=99.85%, 50=0.10%, 100=0.05% 01:00:15.716 cpu : usr=95.36%, sys=4.15%, ctx=14, majf=0, minf=174 01:00:15.716 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:00:15.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:15.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:15.716 issued rwts: total=1976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:15.716 latency : target=0, window=0, percentile=100.00%, depth=3 01:00:15.716 filename0: (groupid=0, jobs=1): err= 0: pid=831317: Mon Dec 9 05:55:08 2024 01:00:15.716 read: IOPS=209, BW=26.2MiB/s (27.5MB/s)(264MiB/10048msec) 01:00:15.716 slat (nsec): min=4346, max=43546, avg=17445.32, stdev=3600.76 01:00:15.716 clat (usec): min=10444, max=54122, avg=14251.70, stdev=1533.88 01:00:15.716 lat (usec): min=10464, max=54142, avg=14269.14, stdev=1533.84 01:00:15.716 clat percentiles (usec): 01:00:15.716 | 1.00th=[11863], 5.00th=[12518], 10.00th=[12911], 20.00th=[13435], 
01:00:15.716 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 01:00:15.716 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15401], 95.00th=[15926], 01:00:15.716 | 99.00th=[16909], 99.50th=[17171], 99.90th=[21627], 99.95th=[47973], 01:00:15.716 | 99.99th=[54264] 01:00:15.716 bw ( KiB/s): min=26112, max=27904, per=34.71%, avg=26956.80, stdev=505.90, samples=20 01:00:15.716 iops : min= 204, max= 218, avg=210.60, stdev= 3.95, samples=20 01:00:15.716 lat (msec) : 20=99.76%, 50=0.19%, 100=0.05% 01:00:15.716 cpu : usr=94.48%, sys=5.05%, ctx=20, majf=0, minf=163 01:00:15.716 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:00:15.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:15.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:15.716 issued rwts: total=2109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:15.716 latency : target=0, window=0, percentile=100.00%, depth=3 01:00:15.716 01:00:15.716 Run status group 0 (all jobs): 01:00:15.716 READ: bw=75.8MiB/s (79.5MB/s), 24.6MiB/s-26.2MiB/s (25.8MB/s-27.5MB/s), io=762MiB (799MB), run=10046-10048msec 01:00:15.716 05:55:08 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 01:00:15.716 05:55:08 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 01:00:15.716 05:55:08 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 01:00:15.716 05:55:08 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 01:00:15.716 05:55:08 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 01:00:15.716 05:55:08 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:00:15.716 05:55:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:15.716 05:55:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:00:15.716 05:55:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:15.716 05:55:08 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:00:15.716 05:55:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:15.716 05:55:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:00:15.716 05:55:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:15.716 01:00:15.716 real 0m11.227s 01:00:15.716 user 0m29.797s 01:00:15.716 sys 0m1.616s 01:00:15.716 05:55:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:15.716 05:55:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:00:15.716 ************************************ 01:00:15.716 END TEST fio_dif_digest 01:00:15.716 ************************************ 01:00:15.716 05:55:08 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 01:00:15.716 05:55:08 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 01:00:15.716 05:55:08 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 01:00:15.716 05:55:08 nvmf_dif -- nvmf/common.sh@121 -- # sync 01:00:15.716 05:55:08 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:00:15.716 05:55:08 nvmf_dif -- nvmf/common.sh@124 -- # set +e 01:00:15.716 05:55:08 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 01:00:15.716 05:55:08 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:00:15.716 rmmod nvme_tcp 01:00:15.716 rmmod nvme_fabrics 01:00:15.716 rmmod nvme_keyring 01:00:15.716 05:55:08 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:00:15.716 05:55:08 nvmf_dif -- nvmf/common.sh@128 -- # set -e 01:00:15.716 05:55:08 nvmf_dif -- nvmf/common.sh@129 -- # return 0 01:00:15.716 05:55:08 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 825130 ']' 01:00:15.716 05:55:08 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 825130 01:00:15.716 05:55:08 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 825130 ']' 01:00:15.716 05:55:08 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 825130 01:00:15.716 05:55:08 nvmf_dif -- common/autotest_common.sh@959 -- # uname 01:00:15.716 05:55:08 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:00:15.716 05:55:08 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 825130 01:00:15.716 05:55:08 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:00:15.716 05:55:08 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:00:15.716 05:55:08 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 825130' 01:00:15.716 killing process with pid 825130 01:00:15.716 05:55:08 nvmf_dif -- common/autotest_common.sh@973 -- # kill 825130 01:00:15.716 05:55:08 nvmf_dif -- common/autotest_common.sh@978 -- # wait 825130 01:00:15.716 05:55:08 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 01:00:15.716 05:55:08 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 01:00:15.716 Waiting for block devices as requested 01:00:15.973 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 01:00:15.973 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 01:00:16.231 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 01:00:16.231 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 01:00:16.231 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 01:00:16.231 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 01:00:16.490 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 01:00:16.490 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 01:00:16.490 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 01:00:16.490 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 01:00:16.749 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 01:00:16.749 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 01:00:16.749 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 01:00:16.749 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 01:00:17.009 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 01:00:17.009 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 01:00:17.009 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 01:00:17.270 05:55:11 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:00:17.270 05:55:11 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:00:17.270 05:55:11 nvmf_dif -- nvmf/common.sh@297 -- # iptr 01:00:17.270 05:55:11 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 01:00:17.270 05:55:11 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:00:17.270 05:55:11 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 01:00:17.270 05:55:11 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:00:17.270 05:55:11 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 01:00:17.270 05:55:11 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:00:17.270 05:55:11 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:00:17.270 05:55:11 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:00:19.180 05:55:13 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 01:00:19.180 
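With the trace prefixes removed, the cleanup just performed (nvmftestfini followed by setup.sh reset) is roughly the following, 825130 being the PID of the SPDK target process (reactor_0 in the ps check above):

sync
modprobe -v -r nvme-tcp                       # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above
modprobe -v -r nvme-fabrics
kill 825130                                   # killprocess 825130
wait 825130
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset   # rebind 0000:88:00.0 and the ioatdma channels from vfio-pci back to kernel drivers
iptables-save | grep -v SPDK_NVMF | iptables-restore                       # drop the SPDK_NVMF firewall rules
ip -4 addr flush cvl_0_1                                                   # clear the test NIC address (plus _remove_spdk_ns for the target-side netns)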
01:00:19.180 real 1m7.763s 01:00:19.180 user 6m34.644s 01:00:19.180 sys 0m17.442s 01:00:19.180 05:55:13 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:19.180 05:55:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:00:19.180 ************************************ 01:00:19.180 END TEST nvmf_dif 01:00:19.180 ************************************ 01:00:19.180 05:55:13 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 01:00:19.180 05:55:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:00:19.180 05:55:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:19.180 05:55:13 -- common/autotest_common.sh@10 -- # set +x 01:00:19.180 ************************************ 01:00:19.180 START TEST nvmf_abort_qd_sizes 01:00:19.180 ************************************ 01:00:19.180 05:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 01:00:19.439 * Looking for test storage... 01:00:19.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:00:19.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:19.440 --rc genhtml_branch_coverage=1 01:00:19.440 --rc genhtml_function_coverage=1 01:00:19.440 --rc genhtml_legend=1 01:00:19.440 --rc geninfo_all_blocks=1 01:00:19.440 --rc geninfo_unexecuted_blocks=1 01:00:19.440 01:00:19.440 ' 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:00:19.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:19.440 --rc genhtml_branch_coverage=1 01:00:19.440 --rc genhtml_function_coverage=1 01:00:19.440 --rc genhtml_legend=1 01:00:19.440 --rc geninfo_all_blocks=1 01:00:19.440 --rc geninfo_unexecuted_blocks=1 01:00:19.440 01:00:19.440 ' 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:00:19.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:19.440 --rc genhtml_branch_coverage=1 01:00:19.440 --rc genhtml_function_coverage=1 01:00:19.440 --rc genhtml_legend=1 01:00:19.440 --rc geninfo_all_blocks=1 01:00:19.440 --rc geninfo_unexecuted_blocks=1 01:00:19.440 01:00:19.440 ' 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:00:19.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:19.440 --rc genhtml_branch_coverage=1 01:00:19.440 --rc genhtml_function_coverage=1 01:00:19.440 --rc genhtml_legend=1 01:00:19.440 --rc geninfo_all_blocks=1 01:00:19.440 --rc geninfo_unexecuted_blocks=1 01:00:19.440 01:00:19.440 ' 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:00:19.440 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:00:19.441 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:00:19.441 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:00:19.441 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:00:19.441 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 01:00:19.441 05:55:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 01:00:19.441 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:00:19.441 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:00:19.441 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 01:00:19.441 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 01:00:19.441 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 01:00:19.441 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:00:19.441 05:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:00:19.441 05:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:00:19.441 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 01:00:19.441 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 01:00:19.441 05:55:13 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 01:00:19.441 05:55:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 01:00:21.972 Found 0000:0a:00.0 (0x8086 - 0x159b) 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 01:00:21.972 Found 0000:0a:00.1 (0x8086 - 0x159b) 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 01:00:21.972 Found net devices under 0000:0a:00.0: cvl_0_0 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 01:00:21.972 Found net devices under 0000:0a:00.1: cvl_0_1 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 01:00:21.972 05:55:15 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:00:21.972 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 01:00:21.973 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:00:21.973 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 01:00:21.973 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:00:21.973 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 01:00:21.973 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 01:00:21.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:00:21.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 01:00:21.973 01:00:21.973 --- 10.0.0.2 ping statistics --- 01:00:21.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:00:21.973 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 01:00:21.973 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:00:21.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:00:21.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 01:00:21.973 01:00:21.973 --- 10.0.0.1 ping statistics --- 01:00:21.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:00:21.973 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 01:00:21.973 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:00:21.973 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 01:00:21.973 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 01:00:21.973 05:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 01:00:22.906 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 01:00:22.906 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 01:00:22.906 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 01:00:22.906 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 01:00:22.906 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 01:00:22.906 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 01:00:22.906 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 01:00:22.906 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 01:00:22.906 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 01:00:22.906 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 01:00:22.906 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 01:00:22.906 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 01:00:22.906 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 01:00:22.906 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 01:00:22.906 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 01:00:22.906 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 01:00:23.857 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 01:00:23.857 05:55:18 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:00:23.857 05:55:18 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:00:23.857 05:55:18 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:00:23.857 05:55:18 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:00:23.857 05:55:18 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:00:23.857 05:55:18 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:00:24.114 05:55:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 01:00:24.114 05:55:18 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:00:24.114 05:55:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 01:00:24.114 05:55:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:00:24.114 05:55:18 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=836237 01:00:24.114 05:55:18 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 01:00:24.114 05:55:18 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 836237 01:00:24.114 05:55:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 836237 ']' 01:00:24.114 05:55:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:24.114 05:55:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:24.114 05:55:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
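Over the last few entries nvmftestinit has rebuilt the back-to-back E810 topology for this suite: port cvl_0_0 is moved into the target namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, the firewall is opened for TCP/4420, both directions are ping-checked, and nvmf_tgt is launched inside the namespace. A minimal sketch of the same steps, using the interface names, addresses and app flags from the trace and assuming it is run as root from the SPDK repo root:

  TGT_NS=cvl_0_0_ns_spdk

  ip netns add "$TGT_NS"
  ip link set cvl_0_0 netns "$TGT_NS"                          # target-facing port into the namespace

  ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side (root namespace)
  ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side

  ip link set cvl_0_1 up
  ip netns exec "$TGT_NS" ip link set cvl_0_0 up
  ip netns exec "$TGT_NS" ip link set lo up

  # open the NVMe/TCP port and tag the rule so nvmftestfini can strip it later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  # verify reachability in both directions
  ping -c 1 10.0.0.2
  ip netns exec "$TGT_NS" ping -c 1 10.0.0.1

  # start the target inside the namespace: shm id 0, all tracepoint groups, cores 0-3
  ip netns exec "$TGT_NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &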
01:00:24.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:24.114 05:55:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:24.114 05:55:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:00:24.114 [2024-12-09 05:55:18.143733] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:00:24.114 [2024-12-09 05:55:18.143819] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:00:24.114 [2024-12-09 05:55:18.218014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:00:24.114 [2024-12-09 05:55:18.278620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:00:24.115 [2024-12-09 05:55:18.278678] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:00:24.115 [2024-12-09 05:55:18.278701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:00:24.115 [2024-12-09 05:55:18.278712] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:00:24.115 [2024-12-09 05:55:18.278722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:00:24.115 [2024-12-09 05:55:18.280189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:00:24.115 [2024-12-09 05:55:18.280253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:00:24.115 [2024-12-09 05:55:18.280323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:00:24.115 [2024-12-09 05:55:18.280327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:24.371 05:55:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:24.372 05:55:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 01:00:24.372 05:55:18 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:00:24.372 05:55:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 01:00:24.372 05:55:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:00:24.372 05:55:18 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:00:24.372 05:55:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 01:00:24.372 05:55:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 01:00:24.372 05:55:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 01:00:24.372 05:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 01:00:24.372 05:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 01:00:24.372 05:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 01:00:24.372 05:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 01:00:24.372 05:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 01:00:24.372 05:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 01:00:24.372 05:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 01:00:24.372 
05:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 01:00:24.372 05:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 01:00:24.372 05:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 01:00:24.372 05:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 01:00:24.372 05:55:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 01:00:24.372 05:55:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 01:00:24.372 05:55:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 01:00:24.372 05:55:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:00:24.372 05:55:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:24.372 05:55:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:00:24.372 ************************************ 01:00:24.372 START TEST spdk_target_abort 01:00:24.372 ************************************ 01:00:24.372 05:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 01:00:24.372 05:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 01:00:24.372 05:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 01:00:24.372 05:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:24.372 05:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:00:27.653 spdk_targetn1 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:00:27.653 [2024-12-09 05:55:21.280822] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:00:27.653 [2024-12-09 05:55:21.321117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:00:27.653 05:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:00:30.934 Initializing NVMe Controllers 01:00:30.934 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 01:00:30.934 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:00:30.934 Initialization complete. Launching workers. 01:00:30.934 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10904, failed: 0 01:00:30.934 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1244, failed to submit 9660 01:00:30.934 success 691, unsuccessful 553, failed 0 01:00:30.934 05:55:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:00:30.934 05:55:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:00:34.208 Initializing NVMe Controllers 01:00:34.208 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 01:00:34.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:00:34.208 Initialization complete. Launching workers. 01:00:34.208 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8684, failed: 0 01:00:34.208 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1261, failed to submit 7423 01:00:34.208 success 341, unsuccessful 920, failed 0 01:00:34.208 05:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:00:34.209 05:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:00:37.489 Initializing NVMe Controllers 01:00:37.489 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 01:00:37.489 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:00:37.489 Initialization complete. Launching workers. 
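For reference, what spdk_target_abort set up for the three abort runs in this block: the local NVMe at 0000:88:00.0 is attached as bdev spdk_target (namespace bdev spdk_targetn1), exported as namespace 1 of nqn.2016-06.io.spdk:testnqn over a TCP listener at 10.0.0.2:4420, and build/examples/abort is aimed at it at queue depths 4, 24 and 64. Roughly the same sequence expressed as direct scripts/rpc.py calls; the trace goes through the harness's rpc_cmd wrapper, so the flags below simply mirror the traced arguments:

  # export the local NVMe behind an NVMe-oF/TCP subsystem
  scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

  # issue aborts against a 50/50 read-write load of 4 KiB I/Os at each queue depth
  for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done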
01:00:37.489 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31385, failed: 0 01:00:37.489 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2821, failed to submit 28564 01:00:37.489 success 478, unsuccessful 2343, failed 0 01:00:37.489 05:55:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 01:00:37.489 05:55:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:37.489 05:55:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:00:37.489 05:55:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:37.489 05:55:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 01:00:37.489 05:55:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:37.489 05:55:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:00:38.862 05:55:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:38.862 05:55:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 836237 01:00:38.862 05:55:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 836237 ']' 01:00:38.862 05:55:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 836237 01:00:38.862 05:55:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 01:00:38.862 05:55:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:00:38.862 05:55:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 836237 01:00:38.862 05:55:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:00:38.862 05:55:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:00:38.862 05:55:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 836237' 01:00:38.862 killing process with pid 836237 01:00:38.862 05:55:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 836237 01:00:38.862 05:55:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 836237 01:00:39.120 01:00:39.120 real 0m14.702s 01:00:39.120 user 0m55.513s 01:00:39.120 sys 0m2.727s 01:00:39.120 05:55:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:39.120 05:55:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:00:39.120 ************************************ 01:00:39.120 END TEST spdk_target_abort 01:00:39.120 ************************************ 01:00:39.120 05:55:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 01:00:39.120 05:55:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:00:39.120 05:55:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:39.120 05:55:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:00:39.120 ************************************ 01:00:39.120 START TEST kernel_target_abort 01:00:39.120 
************************************ 01:00:39.120 05:55:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 01:00:39.120 05:55:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 01:00:39.120 05:55:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 01:00:39.120 05:55:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 01:00:39.120 05:55:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 01:00:39.120 05:55:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:00:39.120 05:55:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:00:39.120 05:55:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:00:39.120 05:55:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:00:39.120 05:55:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:00:39.120 05:55:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:00:39.120 05:55:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:00:39.120 05:55:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 01:00:39.120 05:55:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 01:00:39.120 05:55:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 01:00:39.120 05:55:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:00:39.120 05:55:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:00:39.120 05:55:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 01:00:39.120 05:55:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 01:00:39.120 05:55:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 01:00:39.120 05:55:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 01:00:39.120 05:55:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 01:00:39.120 05:55:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 01:00:40.497 Waiting for block devices as requested 01:00:40.497 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 01:00:40.497 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 01:00:40.497 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 01:00:40.756 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 01:00:40.756 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 01:00:40.756 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 01:00:41.016 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 01:00:41.016 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 01:00:41.016 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 01:00:41.016 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 01:00:41.281 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 01:00:41.281 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 01:00:41.281 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 01:00:41.281 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 01:00:41.539 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 01:00:41.539 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 01:00:41.539 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 01:00:41.797 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:00:41.797 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 01:00:41.797 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 01:00:41.797 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 01:00:41.797 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:00:41.797 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:00:41.797 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 01:00:41.797 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 01:00:41.797 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 01:00:41.797 No valid GPT data, bailing 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:00:41.798 05:55:35 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 01:00:41.798 01:00:41.798 Discovery Log Number of Records 2, Generation counter 2 01:00:41.798 =====Discovery Log Entry 0====== 01:00:41.798 trtype: tcp 01:00:41.798 adrfam: ipv4 01:00:41.798 subtype: current discovery subsystem 01:00:41.798 treq: not specified, sq flow control disable supported 01:00:41.798 portid: 1 01:00:41.798 trsvcid: 4420 01:00:41.798 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:00:41.798 traddr: 10.0.0.1 01:00:41.798 eflags: none 01:00:41.798 sectype: none 01:00:41.798 =====Discovery Log Entry 1====== 01:00:41.798 trtype: tcp 01:00:41.798 adrfam: ipv4 01:00:41.798 subtype: nvme subsystem 01:00:41.798 treq: not specified, sq flow control disable supported 01:00:41.798 portid: 1 01:00:41.798 trsvcid: 4420 01:00:41.798 subnqn: nqn.2016-06.io.spdk:testnqn 01:00:41.798 traddr: 10.0.0.1 01:00:41.798 eflags: none 01:00:41.798 sectype: none 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:00:41.798 05:55:35 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:00:41.798 05:55:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:00:45.075 Initializing NVMe Controllers 01:00:45.075 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:00:45.075 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:00:45.075 Initialization complete. Launching workers. 01:00:45.075 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56260, failed: 0 01:00:45.075 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 56260, failed to submit 0 01:00:45.075 success 0, unsuccessful 56260, failed 0 01:00:45.075 05:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:00:45.075 05:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:00:48.351 Initializing NVMe Controllers 01:00:48.351 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:00:48.351 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:00:48.351 Initialization complete. Launching workers. 
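kernel_target_abort, whose runs are printing around this point, repeats the exercise against a Linux kernel nvmet target instead of SPDK: nvmet is configured through configfs to export /dev/nvme0n1 (the 0000:88:00.0 disk, handed back to the kernel nvme driver by setup.sh reset) as namespace 1 of nqn.2016-06.io.spdk:testnqn on a TCP port at 10.0.0.1:4420, a discovery confirms the two log records, and the abort example is replayed at queue depths 4, 24 and 64. xtrace does not capture redirection targets, so the configfs attribute paths in this sketch follow the standard nvmet layout and are an assumption, as is the destination of the serial-number echo:

  nqn=nqn.2016-06.io.spdk:testnqn
  sub=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1

  modprobe nvmet                                  # the trace loads only nvmet; nvmet_tcp also ends up loaded (teardown removes both)
  mkdir -p "$sub/namespaces/1" "$port"

  echo "SPDK-$nqn"  > "$sub/attr_serial"          # serial string from the trace; attribute path assumed
  echo 1            > "$sub/attr_allow_any_host"
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
  echo 1            > "$sub/namespaces/1/enable"

  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"

  ln -s "$sub" "$port/subsystems/$nqn"            # expose the subsystem on the port

  nvme discover -t tcp -a 10.0.0.1 -s 4420        # should list the discovery and testnqn records

The clean_kernel_target step later in the trace undoes this in reverse: unlink the port's subsystem symlink, rmdir the namespace, port and subsystem directories, then modprobe -r nvmet_tcp nvmet.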
01:00:48.351 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 98088, failed: 0 01:00:48.351 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24710, failed to submit 73378 01:00:48.351 success 0, unsuccessful 24710, failed 0 01:00:48.351 05:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:00:48.351 05:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:00:51.638 Initializing NVMe Controllers 01:00:51.639 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:00:51.639 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:00:51.639 Initialization complete. Launching workers. 01:00:51.639 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 96827, failed: 0 01:00:51.639 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24210, failed to submit 72617 01:00:51.639 success 0, unsuccessful 24210, failed 0 01:00:51.639 05:55:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 01:00:51.639 05:55:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 01:00:51.639 05:55:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 01:00:51.639 05:55:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 01:00:51.639 05:55:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:00:51.639 05:55:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 01:00:51.639 05:55:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:00:51.639 05:55:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 01:00:51.639 05:55:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 01:00:51.639 05:55:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 01:00:52.579 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 01:00:52.579 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 01:00:52.579 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 01:00:52.579 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 01:00:52.579 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 01:00:52.579 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 01:00:52.579 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 01:00:52.579 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 01:00:52.579 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 01:00:52.579 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 01:00:52.837 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 01:00:52.837 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 01:00:52.837 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 01:00:52.837 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 01:00:52.837 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 01:00:52.837 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 01:00:53.768 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 01:00:53.768 01:00:53.768 real 0m14.691s 01:00:53.768 user 0m6.781s 01:00:53.768 sys 0m3.394s 01:00:53.768 05:55:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:53.768 05:55:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 01:00:53.768 ************************************ 01:00:53.768 END TEST kernel_target_abort 01:00:53.768 ************************************ 01:00:53.768 05:55:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 01:00:53.768 05:55:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 01:00:53.768 05:55:47 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 01:00:53.768 05:55:47 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 01:00:53.768 05:55:47 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:00:53.768 05:55:47 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 01:00:53.768 05:55:47 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 01:00:53.768 05:55:47 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:00:53.768 rmmod nvme_tcp 01:00:53.768 rmmod nvme_fabrics 01:00:53.768 rmmod nvme_keyring 01:00:53.768 05:55:47 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:00:53.768 05:55:47 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 01:00:53.768 05:55:47 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 01:00:53.768 05:55:47 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 836237 ']' 01:00:53.768 05:55:47 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 836237 01:00:53.768 05:55:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 836237 ']' 01:00:53.768 05:55:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 836237 01:00:53.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (836237) - No such process 01:00:53.768 05:55:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 836237 is not found' 01:00:53.768 Process with pid 836237 is not found 01:00:53.768 05:55:47 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 01:00:53.768 05:55:47 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 01:00:55.142 Waiting for block devices as requested 01:00:55.142 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 01:00:55.142 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 01:00:55.400 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 01:00:55.400 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 01:00:55.401 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 01:00:55.659 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 01:00:55.659 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 01:00:55.659 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 01:00:55.659 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 01:00:55.916 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 01:00:55.916 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 01:00:55.916 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 01:00:55.916 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 01:00:56.176 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 01:00:56.176 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 01:00:56.176 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 01:00:56.176 0000:80:04.0 (8086 
0e20): vfio-pci -> ioatdma 01:00:56.435 05:55:50 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:00:56.435 05:55:50 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:00:56.435 05:55:50 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 01:00:56.435 05:55:50 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 01:00:56.435 05:55:50 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:00:56.435 05:55:50 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 01:00:56.435 05:55:50 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:00:56.435 05:55:50 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 01:00:56.435 05:55:50 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:00:56.435 05:55:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:00:56.435 05:55:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:00:58.338 05:55:52 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 01:00:58.338 01:00:58.338 real 0m39.172s 01:00:58.338 user 1m4.603s 01:00:58.338 sys 0m9.739s 01:00:58.338 05:55:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 01:00:58.338 05:55:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:00:58.597 ************************************ 01:00:58.597 END TEST nvmf_abort_qd_sizes 01:00:58.597 ************************************ 01:00:58.597 05:55:52 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 01:00:58.597 05:55:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:00:58.597 05:55:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:00:58.597 05:55:52 -- common/autotest_common.sh@10 -- # set +x 01:00:58.597 ************************************ 01:00:58.597 START TEST keyring_file 01:00:58.597 ************************************ 01:00:58.597 05:55:52 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 01:00:58.597 * Looking for test storage... 
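Note on the teardown traced above: the clean_kernel_target sequence (nvmf/common.sh@712-723) dismantles the kernel nvmet target that served the kernel_target_abort run by detaching the subsystem from its port, removing the namespace, port, and subsystem configfs directories, and finally unloading nvmet_tcp/nvmet, after which setup.sh rebinds the ioatdma/nvme devices. A condensed sketch of that configfs sequence follows; it mirrors the traced commands, except that the redirect target of the bare "echo 0" is not shown in the trace and is assumed here to be the namespace enable attribute.

# Condensed from the clean_kernel_target trace above (sketch, not the original script).
nqn=nqn.2016-06.io.spdk:testnqn
cfg=/sys/kernel/config/nvmet
if [[ -e $cfg/subsystems/$nqn ]]; then
    echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"  # assumed target of the bare "echo 0" in the trace
    rm -f "$cfg/ports/1/subsystems/$nqn"                 # detach the subsystem from TCP port 1
    rmdir "$cfg/subsystems/$nqn/namespaces/1"            # remove namespace 1
    rmdir "$cfg/ports/1"                                 # remove the port
    rmdir "$cfg/subsystems/$nqn"                         # remove the subsystem
    modprobe -r nvmet_tcp nvmet                          # unload the kernel target modules
fi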
01:00:58.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 01:00:58.597 05:55:52 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:00:58.597 05:55:52 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 01:00:58.597 05:55:52 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:00:58.597 05:55:52 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:00:58.597 05:55:52 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:00:58.597 05:55:52 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 01:00:58.597 05:55:52 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 01:00:58.597 05:55:52 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 01:00:58.597 05:55:52 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 01:00:58.597 05:55:52 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 01:00:58.597 05:55:52 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 01:00:58.597 05:55:52 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 01:00:58.597 05:55:52 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 01:00:58.597 05:55:52 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 01:00:58.597 05:55:52 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:00:58.597 05:55:52 keyring_file -- scripts/common.sh@344 -- # case "$op" in 01:00:58.597 05:55:52 keyring_file -- scripts/common.sh@345 -- # : 1 01:00:58.597 05:55:52 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 01:00:58.597 05:55:52 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:00:58.597 05:55:52 keyring_file -- scripts/common.sh@365 -- # decimal 1 01:00:58.597 05:55:52 keyring_file -- scripts/common.sh@353 -- # local d=1 01:00:58.597 05:55:52 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:00:58.597 05:55:52 keyring_file -- scripts/common.sh@355 -- # echo 1 01:00:58.597 05:55:52 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 01:00:58.597 05:55:52 keyring_file -- scripts/common.sh@366 -- # decimal 2 01:00:58.598 05:55:52 keyring_file -- scripts/common.sh@353 -- # local d=2 01:00:58.598 05:55:52 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:00:58.598 05:55:52 keyring_file -- scripts/common.sh@355 -- # echo 2 01:00:58.598 05:55:52 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 01:00:58.598 05:55:52 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:00:58.598 05:55:52 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:00:58.598 05:55:52 keyring_file -- scripts/common.sh@368 -- # return 0 01:00:58.598 05:55:52 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:00:58.598 05:55:52 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:00:58.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:58.598 --rc genhtml_branch_coverage=1 01:00:58.598 --rc genhtml_function_coverage=1 01:00:58.598 --rc genhtml_legend=1 01:00:58.598 --rc geninfo_all_blocks=1 01:00:58.598 --rc geninfo_unexecuted_blocks=1 01:00:58.598 01:00:58.598 ' 01:00:58.598 05:55:52 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:00:58.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:58.598 --rc genhtml_branch_coverage=1 01:00:58.598 --rc genhtml_function_coverage=1 01:00:58.598 --rc genhtml_legend=1 01:00:58.598 --rc geninfo_all_blocks=1 
01:00:58.598 --rc geninfo_unexecuted_blocks=1 01:00:58.598 01:00:58.598 ' 01:00:58.598 05:55:52 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:00:58.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:58.598 --rc genhtml_branch_coverage=1 01:00:58.598 --rc genhtml_function_coverage=1 01:00:58.598 --rc genhtml_legend=1 01:00:58.598 --rc geninfo_all_blocks=1 01:00:58.598 --rc geninfo_unexecuted_blocks=1 01:00:58.598 01:00:58.598 ' 01:00:58.598 05:55:52 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:00:58.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:00:58.598 --rc genhtml_branch_coverage=1 01:00:58.598 --rc genhtml_function_coverage=1 01:00:58.598 --rc genhtml_legend=1 01:00:58.598 --rc geninfo_all_blocks=1 01:00:58.598 --rc geninfo_unexecuted_blocks=1 01:00:58.598 01:00:58.598 ' 01:00:58.598 05:55:52 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 01:00:58.598 05:55:52 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@7 -- # uname -s 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:00:58.598 05:55:52 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 01:00:58.598 05:55:52 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:00:58.598 05:55:52 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:00:58.598 05:55:52 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:00:58.598 05:55:52 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:58.598 05:55:52 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:58.598 05:55:52 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:58.598 05:55:52 keyring_file -- paths/export.sh@5 -- # export PATH 01:00:58.598 05:55:52 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@51 -- # : 0 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:00:58.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 01:00:58.598 05:55:52 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 01:00:58.598 05:55:52 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 01:00:58.598 05:55:52 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 01:00:58.598 05:55:52 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 01:00:58.598 05:55:52 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 01:00:58.598 05:55:52 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 01:00:58.598 05:55:52 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 01:00:58.598 05:55:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
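Note on the key preparation starting here: the test defines two 32-hex-digit PSKs (key0=00112233445566778899aabbccddeeff, key1=112233445566778899aabbccddeeff00), and the prep_key/format_interchange_psk trace that follows wraps each into an NVMe/TCP TLS PSK interchange string written to a 0600 temp file. The sketch below is a rough stand-alone equivalent; the exact encoding is an assumption (version prefix, hash-indicator field taken as "01", then base64 of the key bytes plus a little-endian CRC-32 trailer), and the real implementation is the format_key python snippet in nvmf/common.sh.

# Rough stand-alone equivalent of the prep_key trace below (sketch under the assumptions above).
key=00112233445566778899aabbccddeeff
path=$(mktemp)
python3 - "$key" <<'PYEOF' > "$path"
import base64, struct, sys, zlib
k = bytes.fromhex(sys.argv[1])
crc = struct.pack("<I", zlib.crc32(k))  # CRC-32 trailer, byte order assumed little-endian
print("NVMeTLSkey-1:01:" + base64.b64encode(k + crc).decode() + ":")
PYEOF
chmod 0600 "$path"   # 0600 matters: a later step shows keyring_file_add_key rejecting a 0660 file
echo "$path"         # e.g. /tmp/tmp.CfjVSelas5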
01:00:58.598 05:55:52 keyring_file -- keyring/common.sh@17 -- # name=key0 01:00:58.598 05:55:52 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:00:58.598 05:55:52 keyring_file -- keyring/common.sh@17 -- # digest=0 01:00:58.598 05:55:52 keyring_file -- keyring/common.sh@18 -- # mktemp 01:00:58.598 05:55:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.CfjVSelas5 01:00:58.598 05:55:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@732 -- # digest=0 01:00:58.598 05:55:52 keyring_file -- nvmf/common.sh@733 -- # python - 01:00:58.857 05:55:52 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.CfjVSelas5 01:00:58.857 05:55:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.CfjVSelas5 01:00:58.857 05:55:52 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.CfjVSelas5 01:00:58.857 05:55:52 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 01:00:58.857 05:55:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:00:58.857 05:55:52 keyring_file -- keyring/common.sh@17 -- # name=key1 01:00:58.857 05:55:52 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 01:00:58.857 05:55:52 keyring_file -- keyring/common.sh@17 -- # digest=0 01:00:58.857 05:55:52 keyring_file -- keyring/common.sh@18 -- # mktemp 01:00:58.857 05:55:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.r0uvcGTITf 01:00:58.857 05:55:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 01:00:58.857 05:55:52 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 01:00:58.857 05:55:52 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 01:00:58.857 05:55:52 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:00:58.857 05:55:52 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 01:00:58.857 05:55:52 keyring_file -- nvmf/common.sh@732 -- # digest=0 01:00:58.857 05:55:52 keyring_file -- nvmf/common.sh@733 -- # python - 01:00:58.857 05:55:52 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.r0uvcGTITf 01:00:58.857 05:55:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.r0uvcGTITf 01:00:58.857 05:55:52 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.r0uvcGTITf 01:00:58.857 05:55:52 keyring_file -- keyring/file.sh@30 -- # tgtpid=842156 01:00:58.857 05:55:52 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 01:00:58.857 05:55:52 keyring_file -- keyring/file.sh@32 -- # waitforlisten 842156 01:00:58.857 05:55:52 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 842156 ']' 01:00:58.857 05:55:52 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:58.857 05:55:52 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:58.857 05:55:52 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:00:58.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:58.857 05:55:52 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:58.857 05:55:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:00:58.857 [2024-12-09 05:55:52.937415] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:00:58.857 [2024-12-09 05:55:52.937521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid842156 ] 01:00:58.857 [2024-12-09 05:55:53.003007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:58.857 [2024-12-09 05:55:53.061151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:00:59.116 05:55:53 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:59.116 05:55:53 keyring_file -- common/autotest_common.sh@868 -- # return 0 01:00:59.116 05:55:53 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 01:00:59.116 05:55:53 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:59.116 05:55:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:00:59.374 [2024-12-09 05:55:53.341751] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:00:59.374 null0 01:00:59.374 [2024-12-09 05:55:53.373774] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:00:59.374 [2024-12-09 05:55:53.374180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:00:59.374 05:55:53 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:59.374 05:55:53 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:00:59.374 05:55:53 keyring_file -- common/autotest_common.sh@652 -- # local es=0 01:00:59.374 05:55:53 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:00:59.374 05:55:53 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:00:59.374 05:55:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:59.374 05:55:53 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:00:59.374 05:55:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:00:59.374 05:55:53 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:00:59.374 05:55:53 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:59.374 05:55:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:00:59.374 [2024-12-09 05:55:53.397822] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 01:00:59.374 request: 01:00:59.374 { 01:00:59.374 "nqn": "nqn.2016-06.io.spdk:cnode0", 01:00:59.374 "secure_channel": false, 01:00:59.374 "listen_address": { 01:00:59.374 "trtype": "tcp", 01:00:59.374 "traddr": "127.0.0.1", 01:00:59.374 "trsvcid": "4420" 01:00:59.374 }, 01:00:59.374 "method": "nvmf_subsystem_add_listener", 01:00:59.374 "req_id": 1 01:00:59.374 } 01:00:59.374 Got JSON-RPC error response 01:00:59.374 response: 01:00:59.374 { 01:00:59.374 "code": 
-32602, 01:00:59.374 "message": "Invalid parameters" 01:00:59.374 } 01:00:59.374 05:55:53 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:00:59.374 05:55:53 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:00:59.374 05:55:53 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:00:59.374 05:55:53 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:00:59.374 05:55:53 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:00:59.374 05:55:53 keyring_file -- keyring/file.sh@47 -- # bperfpid=842168 01:00:59.374 05:55:53 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 01:00:59.374 05:55:53 keyring_file -- keyring/file.sh@49 -- # waitforlisten 842168 /var/tmp/bperf.sock 01:00:59.374 05:55:53 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 842168 ']' 01:00:59.374 05:55:53 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:00:59.374 05:55:53 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 01:00:59.375 05:55:53 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:00:59.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:00:59.375 05:55:53 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 01:00:59.375 05:55:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:00:59.375 [2024-12-09 05:55:53.447800] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:00:59.375 [2024-12-09 05:55:53.447871] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid842168 ] 01:00:59.375 [2024-12-09 05:55:53.517684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:59.375 [2024-12-09 05:55:53.577071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:00:59.633 05:55:53 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:00:59.633 05:55:53 keyring_file -- common/autotest_common.sh@868 -- # return 0 01:00:59.633 05:55:53 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CfjVSelas5 01:00:59.633 05:55:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CfjVSelas5 01:00:59.890 05:55:53 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.r0uvcGTITf 01:00:59.890 05:55:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.r0uvcGTITf 01:01:00.148 05:55:54 keyring_file -- keyring/file.sh@52 -- # get_key key0 01:01:00.148 05:55:54 keyring_file -- keyring/file.sh@52 -- # jq -r .path 01:01:00.148 05:55:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:01:00.148 05:55:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:01:00.148 05:55:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:01:00.405 
05:55:54 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.CfjVSelas5 == \/\t\m\p\/\t\m\p\.\C\f\j\V\S\e\l\a\s\5 ]] 01:01:00.405 05:55:54 keyring_file -- keyring/file.sh@53 -- # get_key key1 01:01:00.405 05:55:54 keyring_file -- keyring/file.sh@53 -- # jq -r .path 01:01:00.405 05:55:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:01:00.405 05:55:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:01:00.405 05:55:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:01:00.663 05:55:54 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.r0uvcGTITf == \/\t\m\p\/\t\m\p\.\r\0\u\v\c\G\T\I\T\f ]] 01:01:00.663 05:55:54 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 01:01:00.663 05:55:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:01:00.663 05:55:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:01:00.663 05:55:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:01:00.663 05:55:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:01:00.663 05:55:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:01:00.920 05:55:55 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 01:01:00.920 05:55:55 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 01:01:00.920 05:55:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:01:00.920 05:55:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:01:00.920 05:55:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:01:00.920 05:55:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:01:00.920 05:55:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:01:01.176 05:55:55 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 01:01:01.176 05:55:55 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:01:01.176 05:55:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:01:01.433 [2024-12-09 05:55:55.553976] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:01:01.433 nvme0n1 01:01:01.433 05:55:55 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 01:01:01.433 05:55:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:01:01.433 05:55:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:01:01.433 05:55:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:01:01.433 05:55:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:01:01.433 05:55:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:01:01.996 05:55:55 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 01:01:01.996 05:55:55 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 01:01:01.996 05:55:55 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 01:01:01.996 05:55:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:01:01.996 05:55:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:01:01.996 05:55:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:01:01.997 05:55:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:01:01.997 05:55:56 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 01:01:01.997 05:55:56 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:01:02.253 Running I/O for 1 seconds... 01:01:03.182 10427.00 IOPS, 40.73 MiB/s 01:01:03.182 Latency(us) 01:01:03.182 [2024-12-09T04:55:57.407Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:01:03.182 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 01:01:03.182 nvme0n1 : 1.01 10473.07 40.91 0.00 0.00 12181.73 3956.43 17573.36 01:01:03.182 [2024-12-09T04:55:57.407Z] =================================================================================================================== 01:01:03.182 [2024-12-09T04:55:57.407Z] Total : 10473.07 40.91 0.00 0.00 12181.73 3956.43 17573.36 01:01:03.182 { 01:01:03.182 "results": [ 01:01:03.182 { 01:01:03.182 "job": "nvme0n1", 01:01:03.182 "core_mask": "0x2", 01:01:03.182 "workload": "randrw", 01:01:03.182 "percentage": 50, 01:01:03.182 "status": "finished", 01:01:03.182 "queue_depth": 128, 01:01:03.182 "io_size": 4096, 01:01:03.182 "runtime": 1.007918, 01:01:03.182 "iops": 10473.074198496306, 01:01:03.182 "mibps": 40.910446087876196, 01:01:03.182 "io_failed": 0, 01:01:03.182 "io_timeout": 0, 01:01:03.182 "avg_latency_us": 12181.729227400952, 01:01:03.183 "min_latency_us": 3956.4325925925928, 01:01:03.183 "max_latency_us": 17573.357037037036 01:01:03.183 } 01:01:03.183 ], 01:01:03.183 "core_count": 1 01:01:03.183 } 01:01:03.183 05:55:57 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:01:03.183 05:55:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:01:03.439 05:55:57 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 01:01:03.439 05:55:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:01:03.439 05:55:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:01:03.439 05:55:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:01:03.439 05:55:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:01:03.439 05:55:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:01:03.695 05:55:57 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 01:01:03.695 05:55:57 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 01:01:03.695 05:55:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:01:03.695 05:55:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:01:03.695 05:55:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:01:03.695 05:55:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:01:03.695 05:55:57 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:01:03.951 05:55:58 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 01:01:03.951 05:55:58 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:01:03.951 05:55:58 keyring_file -- common/autotest_common.sh@652 -- # local es=0 01:01:03.951 05:55:58 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:01:03.951 05:55:58 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 01:01:03.951 05:55:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:03.951 05:55:58 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:01:03.951 05:55:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:03.951 05:55:58 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:01:03.951 05:55:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:01:04.233 [2024-12-09 05:55:58.413728] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:01:04.233 [2024-12-09 05:55:58.414357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c97530 (107): Transport endpoint is not connected 01:01:04.233 [2024-12-09 05:55:58.415347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c97530 (9): Bad file descriptor 01:01:04.233 [2024-12-09 05:55:58.416347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 01:01:04.233 [2024-12-09 05:55:58.416365] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 01:01:04.233 [2024-12-09 05:55:58.416377] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 01:01:04.233 [2024-12-09 05:55:58.416391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
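Note on the errors just above: they are the expected outcome of keyring/file.sh@70. The bperf target was set up with key0, so a second bdev_nvme_attach_controller using key1 must fail TLS setup, and the NOT wrapper from autotest_common.sh converts that failure into a passing assertion. A minimal stand-in for that pattern is sketched below; expect_failure is a hypothetical name for illustration (the real helper is NOT), and the rpc.py invocation copies the one traced above, run from the SPDK tree.

expect_failure() {   # hypothetical stand-in for the NOT helper used in the trace
    if "$@"; then
        echo "expected failure but command succeeded: $*" >&2
        return 1
    fi
    return 0         # the command failed, which is what this negative test wants
}

expect_failure ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1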
01:01:04.233 request: 01:01:04.233 { 01:01:04.233 "name": "nvme0", 01:01:04.233 "trtype": "tcp", 01:01:04.233 "traddr": "127.0.0.1", 01:01:04.233 "adrfam": "ipv4", 01:01:04.233 "trsvcid": "4420", 01:01:04.233 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:01:04.233 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:01:04.233 "prchk_reftag": false, 01:01:04.233 "prchk_guard": false, 01:01:04.233 "hdgst": false, 01:01:04.233 "ddgst": false, 01:01:04.233 "psk": "key1", 01:01:04.233 "allow_unrecognized_csi": false, 01:01:04.233 "method": "bdev_nvme_attach_controller", 01:01:04.233 "req_id": 1 01:01:04.233 } 01:01:04.233 Got JSON-RPC error response 01:01:04.233 response: 01:01:04.233 { 01:01:04.233 "code": -5, 01:01:04.233 "message": "Input/output error" 01:01:04.233 } 01:01:04.233 05:55:58 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:01:04.233 05:55:58 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:04.233 05:55:58 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:01:04.233 05:55:58 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:04.233 05:55:58 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 01:01:04.233 05:55:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:01:04.233 05:55:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:01:04.234 05:55:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:01:04.234 05:55:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:01:04.234 05:55:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:01:04.526 05:55:58 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 01:01:04.526 05:55:58 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 01:01:04.526 05:55:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:01:04.526 05:55:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:01:04.526 05:55:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:01:04.526 05:55:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:01:04.526 05:55:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:01:04.799 05:55:58 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 01:01:04.799 05:55:58 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 01:01:04.799 05:55:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:01:05.056 05:55:59 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 01:01:05.056 05:55:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 01:01:05.313 05:55:59 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 01:01:05.313 05:55:59 keyring_file -- keyring/file.sh@78 -- # jq length 01:01:05.313 05:55:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:01:05.877 05:55:59 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 01:01:05.877 05:55:59 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.CfjVSelas5 01:01:05.877 05:55:59 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.CfjVSelas5 01:01:05.877 05:55:59 keyring_file -- common/autotest_common.sh@652 -- # local es=0 01:01:05.877 05:55:59 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.CfjVSelas5 01:01:05.877 05:55:59 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 01:01:05.877 05:55:59 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:05.877 05:55:59 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:01:05.877 05:55:59 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:05.877 05:55:59 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CfjVSelas5 01:01:05.877 05:55:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CfjVSelas5 01:01:05.877 [2024-12-09 05:56:00.054418] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.CfjVSelas5': 0100660 01:01:05.877 [2024-12-09 05:56:00.054470] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 01:01:05.877 request: 01:01:05.877 { 01:01:05.877 "name": "key0", 01:01:05.877 "path": "/tmp/tmp.CfjVSelas5", 01:01:05.877 "method": "keyring_file_add_key", 01:01:05.877 "req_id": 1 01:01:05.877 } 01:01:05.877 Got JSON-RPC error response 01:01:05.877 response: 01:01:05.877 { 01:01:05.877 "code": -1, 01:01:05.877 "message": "Operation not permitted" 01:01:05.877 } 01:01:05.877 05:56:00 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:01:05.877 05:56:00 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:05.877 05:56:00 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:01:05.877 05:56:00 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:05.877 05:56:00 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.CfjVSelas5 01:01:05.877 05:56:00 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CfjVSelas5 01:01:05.877 05:56:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CfjVSelas5 01:01:06.442 05:56:00 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.CfjVSelas5 01:01:06.442 05:56:00 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 01:01:06.442 05:56:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:01:06.442 05:56:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:01:06.442 05:56:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:01:06.442 05:56:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:01:06.442 05:56:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:01:06.442 05:56:00 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 01:01:06.442 05:56:00 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:01:06.442 05:56:00 keyring_file -- common/autotest_common.sh@652 -- # local es=0 01:01:06.442 05:56:00 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:01:06.442 05:56:00 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 01:01:06.442 05:56:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:06.442 05:56:00 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:01:06.442 05:56:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:06.442 05:56:00 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:01:06.442 05:56:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:01:06.700 [2024-12-09 05:56:00.908757] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.CfjVSelas5': No such file or directory 01:01:06.700 [2024-12-09 05:56:00.908796] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 01:01:06.700 [2024-12-09 05:56:00.908819] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 01:01:06.700 [2024-12-09 05:56:00.908831] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 01:01:06.700 [2024-12-09 05:56:00.908843] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:01:06.700 [2024-12-09 05:56:00.908853] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 01:01:06.700 request: 01:01:06.700 { 01:01:06.700 "name": "nvme0", 01:01:06.700 "trtype": "tcp", 01:01:06.700 "traddr": "127.0.0.1", 01:01:06.700 "adrfam": "ipv4", 01:01:06.700 "trsvcid": "4420", 01:01:06.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:01:06.700 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:01:06.700 "prchk_reftag": false, 01:01:06.700 "prchk_guard": false, 01:01:06.700 "hdgst": false, 01:01:06.700 "ddgst": false, 01:01:06.700 "psk": "key0", 01:01:06.700 "allow_unrecognized_csi": false, 01:01:06.700 "method": "bdev_nvme_attach_controller", 01:01:06.700 "req_id": 1 01:01:06.700 } 01:01:06.700 Got JSON-RPC error response 01:01:06.700 response: 01:01:06.700 { 01:01:06.700 "code": -19, 01:01:06.700 "message": "No such device" 01:01:06.700 } 01:01:06.958 05:56:00 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:01:06.958 05:56:00 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:06.958 05:56:00 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:01:06.958 05:56:00 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:06.958 05:56:00 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 01:01:06.958 05:56:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:01:07.216 05:56:01 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 01:01:07.216 05:56:01 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 01:01:07.216 05:56:01 keyring_file -- keyring/common.sh@17 -- # name=key0 01:01:07.216 05:56:01 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:01:07.216 05:56:01 keyring_file -- keyring/common.sh@17 -- # digest=0 01:01:07.216 05:56:01 keyring_file -- keyring/common.sh@18 -- # mktemp 01:01:07.216 05:56:01 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nJzWAlsqKR 01:01:07.216 05:56:01 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:01:07.216 05:56:01 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:01:07.216 05:56:01 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 01:01:07.216 05:56:01 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:01:07.216 05:56:01 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 01:01:07.216 05:56:01 keyring_file -- nvmf/common.sh@732 -- # digest=0 01:01:07.216 05:56:01 keyring_file -- nvmf/common.sh@733 -- # python - 01:01:07.216 05:56:01 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nJzWAlsqKR 01:01:07.216 05:56:01 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nJzWAlsqKR 01:01:07.216 05:56:01 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.nJzWAlsqKR 01:01:07.216 05:56:01 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nJzWAlsqKR 01:01:07.216 05:56:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nJzWAlsqKR 01:01:07.474 05:56:01 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:01:07.474 05:56:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:01:07.732 nvme0n1 01:01:07.732 05:56:01 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 01:01:07.732 05:56:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:01:07.732 05:56:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:01:07.732 05:56:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:01:07.732 05:56:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:01:07.732 05:56:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:01:07.990 05:56:02 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 01:01:07.990 05:56:02 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 01:01:07.990 05:56:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:01:08.248 05:56:02 keyring_file -- keyring/file.sh@102 -- # get_key key0 01:01:08.248 05:56:02 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 01:01:08.248 05:56:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:01:08.248 05:56:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 01:01:08.248 05:56:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:01:08.506 05:56:02 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 01:01:08.506 05:56:02 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 01:01:08.506 05:56:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:01:08.506 05:56:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:01:08.506 05:56:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:01:08.506 05:56:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:01:08.506 05:56:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:01:08.763 05:56:02 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 01:01:08.763 05:56:02 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:01:08.763 05:56:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:01:09.021 05:56:03 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 01:01:09.021 05:56:03 keyring_file -- keyring/file.sh@105 -- # jq length 01:01:09.021 05:56:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:01:09.278 05:56:03 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 01:01:09.278 05:56:03 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nJzWAlsqKR 01:01:09.278 05:56:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nJzWAlsqKR 01:01:09.844 05:56:03 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.r0uvcGTITf 01:01:09.844 05:56:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.r0uvcGTITf 01:01:09.844 05:56:04 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:01:09.844 05:56:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:01:10.410 nvme0n1 01:01:10.410 05:56:04 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 01:01:10.410 05:56:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 01:01:10.669 05:56:04 keyring_file -- keyring/file.sh@113 -- # config='{ 01:01:10.669 "subsystems": [ 01:01:10.669 { 01:01:10.669 "subsystem": "keyring", 01:01:10.669 "config": [ 01:01:10.669 { 01:01:10.669 "method": "keyring_file_add_key", 01:01:10.669 "params": { 01:01:10.669 "name": "key0", 01:01:10.669 "path": "/tmp/tmp.nJzWAlsqKR" 01:01:10.669 } 01:01:10.669 }, 01:01:10.669 { 01:01:10.669 "method": "keyring_file_add_key", 01:01:10.669 "params": { 01:01:10.669 "name": "key1", 01:01:10.669 "path": "/tmp/tmp.r0uvcGTITf" 01:01:10.669 } 01:01:10.669 } 01:01:10.669 ] 
01:01:10.669 }, 01:01:10.669 { 01:01:10.669 "subsystem": "iobuf", 01:01:10.669 "config": [ 01:01:10.669 { 01:01:10.669 "method": "iobuf_set_options", 01:01:10.669 "params": { 01:01:10.669 "small_pool_count": 8192, 01:01:10.669 "large_pool_count": 1024, 01:01:10.669 "small_bufsize": 8192, 01:01:10.669 "large_bufsize": 135168, 01:01:10.669 "enable_numa": false 01:01:10.669 } 01:01:10.669 } 01:01:10.669 ] 01:01:10.669 }, 01:01:10.669 { 01:01:10.669 "subsystem": "sock", 01:01:10.669 "config": [ 01:01:10.669 { 01:01:10.669 "method": "sock_set_default_impl", 01:01:10.669 "params": { 01:01:10.669 "impl_name": "posix" 01:01:10.669 } 01:01:10.669 }, 01:01:10.669 { 01:01:10.669 "method": "sock_impl_set_options", 01:01:10.669 "params": { 01:01:10.669 "impl_name": "ssl", 01:01:10.669 "recv_buf_size": 4096, 01:01:10.669 "send_buf_size": 4096, 01:01:10.669 "enable_recv_pipe": true, 01:01:10.669 "enable_quickack": false, 01:01:10.669 "enable_placement_id": 0, 01:01:10.669 "enable_zerocopy_send_server": true, 01:01:10.669 "enable_zerocopy_send_client": false, 01:01:10.669 "zerocopy_threshold": 0, 01:01:10.669 "tls_version": 0, 01:01:10.669 "enable_ktls": false 01:01:10.669 } 01:01:10.669 }, 01:01:10.669 { 01:01:10.669 "method": "sock_impl_set_options", 01:01:10.669 "params": { 01:01:10.669 "impl_name": "posix", 01:01:10.669 "recv_buf_size": 2097152, 01:01:10.669 "send_buf_size": 2097152, 01:01:10.669 "enable_recv_pipe": true, 01:01:10.669 "enable_quickack": false, 01:01:10.669 "enable_placement_id": 0, 01:01:10.669 "enable_zerocopy_send_server": true, 01:01:10.669 "enable_zerocopy_send_client": false, 01:01:10.669 "zerocopy_threshold": 0, 01:01:10.669 "tls_version": 0, 01:01:10.669 "enable_ktls": false 01:01:10.669 } 01:01:10.669 } 01:01:10.669 ] 01:01:10.669 }, 01:01:10.669 { 01:01:10.669 "subsystem": "vmd", 01:01:10.669 "config": [] 01:01:10.669 }, 01:01:10.669 { 01:01:10.669 "subsystem": "accel", 01:01:10.669 "config": [ 01:01:10.669 { 01:01:10.669 "method": "accel_set_options", 01:01:10.669 "params": { 01:01:10.669 "small_cache_size": 128, 01:01:10.669 "large_cache_size": 16, 01:01:10.669 "task_count": 2048, 01:01:10.669 "sequence_count": 2048, 01:01:10.669 "buf_count": 2048 01:01:10.669 } 01:01:10.669 } 01:01:10.669 ] 01:01:10.669 }, 01:01:10.669 { 01:01:10.669 "subsystem": "bdev", 01:01:10.670 "config": [ 01:01:10.670 { 01:01:10.670 "method": "bdev_set_options", 01:01:10.670 "params": { 01:01:10.670 "bdev_io_pool_size": 65535, 01:01:10.670 "bdev_io_cache_size": 256, 01:01:10.670 "bdev_auto_examine": true, 01:01:10.670 "iobuf_small_cache_size": 128, 01:01:10.670 "iobuf_large_cache_size": 16 01:01:10.670 } 01:01:10.670 }, 01:01:10.670 { 01:01:10.670 "method": "bdev_raid_set_options", 01:01:10.670 "params": { 01:01:10.670 "process_window_size_kb": 1024, 01:01:10.670 "process_max_bandwidth_mb_sec": 0 01:01:10.670 } 01:01:10.670 }, 01:01:10.670 { 01:01:10.670 "method": "bdev_iscsi_set_options", 01:01:10.670 "params": { 01:01:10.670 "timeout_sec": 30 01:01:10.670 } 01:01:10.670 }, 01:01:10.670 { 01:01:10.670 "method": "bdev_nvme_set_options", 01:01:10.670 "params": { 01:01:10.670 "action_on_timeout": "none", 01:01:10.670 "timeout_us": 0, 01:01:10.670 "timeout_admin_us": 0, 01:01:10.670 "keep_alive_timeout_ms": 10000, 01:01:10.670 "arbitration_burst": 0, 01:01:10.670 "low_priority_weight": 0, 01:01:10.670 "medium_priority_weight": 0, 01:01:10.670 "high_priority_weight": 0, 01:01:10.670 "nvme_adminq_poll_period_us": 10000, 01:01:10.670 "nvme_ioq_poll_period_us": 0, 01:01:10.670 "io_queue_requests": 512, 
01:01:10.670 "delay_cmd_submit": true, 01:01:10.670 "transport_retry_count": 4, 01:01:10.670 "bdev_retry_count": 3, 01:01:10.670 "transport_ack_timeout": 0, 01:01:10.670 "ctrlr_loss_timeout_sec": 0, 01:01:10.670 "reconnect_delay_sec": 0, 01:01:10.670 "fast_io_fail_timeout_sec": 0, 01:01:10.670 "disable_auto_failback": false, 01:01:10.670 "generate_uuids": false, 01:01:10.670 "transport_tos": 0, 01:01:10.670 "nvme_error_stat": false, 01:01:10.670 "rdma_srq_size": 0, 01:01:10.670 "io_path_stat": false, 01:01:10.670 "allow_accel_sequence": false, 01:01:10.670 "rdma_max_cq_size": 0, 01:01:10.670 "rdma_cm_event_timeout_ms": 0, 01:01:10.670 "dhchap_digests": [ 01:01:10.670 "sha256", 01:01:10.670 "sha384", 01:01:10.670 "sha512" 01:01:10.670 ], 01:01:10.670 "dhchap_dhgroups": [ 01:01:10.670 "null", 01:01:10.670 "ffdhe2048", 01:01:10.670 "ffdhe3072", 01:01:10.670 "ffdhe4096", 01:01:10.670 "ffdhe6144", 01:01:10.670 "ffdhe8192" 01:01:10.670 ] 01:01:10.670 } 01:01:10.670 }, 01:01:10.670 { 01:01:10.670 "method": "bdev_nvme_attach_controller", 01:01:10.670 "params": { 01:01:10.670 "name": "nvme0", 01:01:10.670 "trtype": "TCP", 01:01:10.670 "adrfam": "IPv4", 01:01:10.670 "traddr": "127.0.0.1", 01:01:10.670 "trsvcid": "4420", 01:01:10.670 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:01:10.670 "prchk_reftag": false, 01:01:10.670 "prchk_guard": false, 01:01:10.670 "ctrlr_loss_timeout_sec": 0, 01:01:10.670 "reconnect_delay_sec": 0, 01:01:10.670 "fast_io_fail_timeout_sec": 0, 01:01:10.670 "psk": "key0", 01:01:10.670 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:01:10.670 "hdgst": false, 01:01:10.670 "ddgst": false, 01:01:10.670 "multipath": "multipath" 01:01:10.670 } 01:01:10.670 }, 01:01:10.670 { 01:01:10.670 "method": "bdev_nvme_set_hotplug", 01:01:10.670 "params": { 01:01:10.670 "period_us": 100000, 01:01:10.670 "enable": false 01:01:10.670 } 01:01:10.670 }, 01:01:10.670 { 01:01:10.670 "method": "bdev_wait_for_examine" 01:01:10.670 } 01:01:10.670 ] 01:01:10.670 }, 01:01:10.670 { 01:01:10.670 "subsystem": "nbd", 01:01:10.670 "config": [] 01:01:10.670 } 01:01:10.670 ] 01:01:10.670 }' 01:01:10.670 05:56:04 keyring_file -- keyring/file.sh@115 -- # killprocess 842168 01:01:10.670 05:56:04 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 842168 ']' 01:01:10.670 05:56:04 keyring_file -- common/autotest_common.sh@958 -- # kill -0 842168 01:01:10.670 05:56:04 keyring_file -- common/autotest_common.sh@959 -- # uname 01:01:10.670 05:56:04 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:01:10.670 05:56:04 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 842168 01:01:10.670 05:56:04 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:01:10.670 05:56:04 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:01:10.670 05:56:04 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 842168' 01:01:10.670 killing process with pid 842168 01:01:10.670 05:56:04 keyring_file -- common/autotest_common.sh@973 -- # kill 842168 01:01:10.670 Received shutdown signal, test time was about 1.000000 seconds 01:01:10.670 01:01:10.670 Latency(us) 01:01:10.670 [2024-12-09T04:56:04.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:01:10.670 [2024-12-09T04:56:04.895Z] =================================================================================================================== 01:01:10.670 [2024-12-09T04:56:04.895Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:01:10.670 
05:56:04 keyring_file -- common/autotest_common.sh@978 -- # wait 842168 01:01:10.929 05:56:04 keyring_file -- keyring/file.sh@118 -- # bperfpid=843760 01:01:10.929 05:56:04 keyring_file -- keyring/file.sh@120 -- # waitforlisten 843760 /var/tmp/bperf.sock 01:01:10.929 05:56:04 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 843760 ']' 01:01:10.929 05:56:04 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 01:01:10.929 05:56:04 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:01:10.929 05:56:04 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 01:01:10.929 05:56:04 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:01:10.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:01:10.929 05:56:04 keyring_file -- keyring/file.sh@116 -- # echo '{ 01:01:10.929 "subsystems": [ 01:01:10.929 { 01:01:10.929 "subsystem": "keyring", 01:01:10.929 "config": [ 01:01:10.929 { 01:01:10.929 "method": "keyring_file_add_key", 01:01:10.929 "params": { 01:01:10.929 "name": "key0", 01:01:10.929 "path": "/tmp/tmp.nJzWAlsqKR" 01:01:10.929 } 01:01:10.929 }, 01:01:10.929 { 01:01:10.929 "method": "keyring_file_add_key", 01:01:10.929 "params": { 01:01:10.929 "name": "key1", 01:01:10.929 "path": "/tmp/tmp.r0uvcGTITf" 01:01:10.929 } 01:01:10.929 } 01:01:10.929 ] 01:01:10.929 }, 01:01:10.929 { 01:01:10.929 "subsystem": "iobuf", 01:01:10.929 "config": [ 01:01:10.929 { 01:01:10.929 "method": "iobuf_set_options", 01:01:10.929 "params": { 01:01:10.929 "small_pool_count": 8192, 01:01:10.929 "large_pool_count": 1024, 01:01:10.929 "small_bufsize": 8192, 01:01:10.929 "large_bufsize": 135168, 01:01:10.929 "enable_numa": false 01:01:10.929 } 01:01:10.929 } 01:01:10.929 ] 01:01:10.929 }, 01:01:10.929 { 01:01:10.929 "subsystem": "sock", 01:01:10.929 "config": [ 01:01:10.929 { 01:01:10.929 "method": "sock_set_default_impl", 01:01:10.929 "params": { 01:01:10.929 "impl_name": "posix" 01:01:10.929 } 01:01:10.929 }, 01:01:10.929 { 01:01:10.929 "method": "sock_impl_set_options", 01:01:10.929 "params": { 01:01:10.929 "impl_name": "ssl", 01:01:10.929 "recv_buf_size": 4096, 01:01:10.929 "send_buf_size": 4096, 01:01:10.929 "enable_recv_pipe": true, 01:01:10.929 "enable_quickack": false, 01:01:10.929 "enable_placement_id": 0, 01:01:10.929 "enable_zerocopy_send_server": true, 01:01:10.929 "enable_zerocopy_send_client": false, 01:01:10.929 "zerocopy_threshold": 0, 01:01:10.929 "tls_version": 0, 01:01:10.929 "enable_ktls": false 01:01:10.929 } 01:01:10.930 }, 01:01:10.930 { 01:01:10.930 "method": "sock_impl_set_options", 01:01:10.930 "params": { 01:01:10.930 "impl_name": "posix", 01:01:10.930 "recv_buf_size": 2097152, 01:01:10.930 "send_buf_size": 2097152, 01:01:10.930 "enable_recv_pipe": true, 01:01:10.930 "enable_quickack": false, 01:01:10.930 "enable_placement_id": 0, 01:01:10.930 "enable_zerocopy_send_server": true, 01:01:10.930 "enable_zerocopy_send_client": false, 01:01:10.930 "zerocopy_threshold": 0, 01:01:10.930 "tls_version": 0, 01:01:10.930 "enable_ktls": false 01:01:10.930 } 01:01:10.930 } 01:01:10.930 ] 01:01:10.930 }, 01:01:10.930 { 01:01:10.930 "subsystem": "vmd", 01:01:10.930 "config": [] 01:01:10.930 }, 01:01:10.930 { 01:01:10.930 "subsystem": "accel", 01:01:10.930 "config": [ 
01:01:10.930 { 01:01:10.930 "method": "accel_set_options", 01:01:10.930 "params": { 01:01:10.930 "small_cache_size": 128, 01:01:10.930 "large_cache_size": 16, 01:01:10.930 "task_count": 2048, 01:01:10.930 "sequence_count": 2048, 01:01:10.930 "buf_count": 2048 01:01:10.930 } 01:01:10.930 } 01:01:10.930 ] 01:01:10.930 }, 01:01:10.930 { 01:01:10.930 "subsystem": "bdev", 01:01:10.930 "config": [ 01:01:10.930 { 01:01:10.930 "method": "bdev_set_options", 01:01:10.930 "params": { 01:01:10.930 "bdev_io_pool_size": 65535, 01:01:10.930 "bdev_io_cache_size": 256, 01:01:10.930 "bdev_auto_examine": true, 01:01:10.930 "iobuf_small_cache_size": 128, 01:01:10.930 "iobuf_large_cache_size": 16 01:01:10.930 } 01:01:10.930 }, 01:01:10.930 { 01:01:10.930 "method": "bdev_raid_set_options", 01:01:10.930 "params": { 01:01:10.930 "process_window_size_kb": 1024, 01:01:10.930 "process_max_bandwidth_mb_sec": 0 01:01:10.930 } 01:01:10.930 }, 01:01:10.930 { 01:01:10.930 "method": "bdev_iscsi_set_options", 01:01:10.930 "params": { 01:01:10.930 "timeout_sec": 30 01:01:10.930 } 01:01:10.930 }, 01:01:10.930 { 01:01:10.930 "method": "bdev_nvme_set_options", 01:01:10.930 "params": { 01:01:10.930 "action_on_timeout": "none", 01:01:10.930 "timeout_us": 0, 01:01:10.930 "timeout_admin_us": 0, 01:01:10.930 "keep_alive_timeout_ms": 10000, 01:01:10.930 "arbitration_burst": 0, 01:01:10.930 "low_priority_weight": 0, 01:01:10.930 "medium_priority_weight": 0, 01:01:10.930 "high_priority_weight": 0, 01:01:10.930 "nvme_adminq_poll_period_us": 10000, 01:01:10.930 "nvme_ioq_poll_period_us": 0, 01:01:10.930 "io_queue_requests": 512, 01:01:10.930 "delay_cmd_submit": true, 01:01:10.930 "transport_retry_count": 4, 01:01:10.930 "bdev_retry_count": 3, 01:01:10.930 "transport_ack_timeout": 0, 01:01:10.930 "ctrlr_loss_timeout_sec": 0, 01:01:10.930 "reconnect_delay_sec": 0, 01:01:10.930 "fast_io_fail_timeout_sec": 0, 01:01:10.930 "disable_auto_failback": false, 01:01:10.930 "generate_uuids": false, 01:01:10.930 "transport_tos": 0, 01:01:10.930 "nvme_error_stat": false, 01:01:10.930 "rdma_srq_size": 0, 01:01:10.930 05:56:04 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 01:01:10.930 "io_path_stat": false, 01:01:10.930 "allow_accel_sequence": false, 01:01:10.930 "rdma_max_cq_size": 0, 01:01:10.930 "rdma_cm_event_timeout_ms": 0, 01:01:10.930 "dhchap_digests": [ 01:01:10.930 "sha256", 01:01:10.930 "sha384", 01:01:10.930 "sha512" 01:01:10.930 ], 01:01:10.930 "dhchap_dhgroups": [ 01:01:10.930 "null", 01:01:10.930 "ffdhe2048", 01:01:10.930 "ffdhe3072", 01:01:10.930 "ffdhe4096", 01:01:10.930 "ffdhe6144", 01:01:10.930 "ffdhe8192" 01:01:10.930 ] 01:01:10.930 } 01:01:10.930 }, 01:01:10.930 { 01:01:10.930 "method": "bdev_nvme_attach_controller", 01:01:10.930 "params": { 01:01:10.930 "name": "nvme0", 01:01:10.930 "trtype": "TCP", 01:01:10.930 "adrfam": "IPv4", 01:01:10.930 "traddr": "127.0.0.1", 01:01:10.930 "trsvcid": "4420", 01:01:10.930 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:01:10.930 "prchk_reftag": false, 01:01:10.930 "prchk_guard": false, 01:01:10.930 "ctrlr_loss_timeout_sec": 0, 01:01:10.930 "reconnect_delay_sec": 0, 01:01:10.930 "fast_io_fail_timeout_sec": 0, 01:01:10.930 "psk": "key0", 01:01:10.930 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:01:10.930 "hdgst": false, 01:01:10.930 "ddgst": false, 01:01:10.930 "multipath": "multipath" 01:01:10.930 } 01:01:10.930 }, 01:01:10.930 { 01:01:10.930 "method": "bdev_nvme_set_hotplug", 01:01:10.930 "params": { 01:01:10.930 "period_us": 100000, 01:01:10.930 "enable": false 01:01:10.930 } 
01:01:10.930 }, 01:01:10.930 { 01:01:10.930 "method": "bdev_wait_for_examine" 01:01:10.930 } 01:01:10.930 ] 01:01:10.930 }, 01:01:10.930 { 01:01:10.930 "subsystem": "nbd", 01:01:10.930 "config": [] 01:01:10.930 } 01:01:10.930 ] 01:01:10.930 }' 01:01:10.930 05:56:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:01:10.930 [2024-12-09 05:56:05.028381] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 01:01:10.930 [2024-12-09 05:56:05.028456] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid843760 ] 01:01:10.930 [2024-12-09 05:56:05.095208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:11.188 [2024-12-09 05:56:05.158408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:01:11.188 [2024-12-09 05:56:05.350303] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:01:11.444 05:56:05 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:01:11.444 05:56:05 keyring_file -- common/autotest_common.sh@868 -- # return 0 01:01:11.444 05:56:05 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 01:01:11.444 05:56:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:01:11.444 05:56:05 keyring_file -- keyring/file.sh@121 -- # jq length 01:01:11.701 05:56:05 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 01:01:11.701 05:56:05 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 01:01:11.701 05:56:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:01:11.701 05:56:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:01:11.701 05:56:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:01:11.701 05:56:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:01:11.701 05:56:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:01:11.959 05:56:06 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 01:01:11.959 05:56:06 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 01:01:11.959 05:56:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:01:11.959 05:56:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:01:11.959 05:56:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:01:11.959 05:56:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:01:11.959 05:56:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:01:12.215 05:56:06 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 01:01:12.215 05:56:06 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 01:01:12.215 05:56:06 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 01:01:12.215 05:56:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 01:01:12.472 05:56:06 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 01:01:12.472 05:56:06 keyring_file -- keyring/file.sh@1 -- # cleanup 01:01:12.472 05:56:06 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.nJzWAlsqKR /tmp/tmp.r0uvcGTITf 01:01:12.472 05:56:06 keyring_file -- keyring/file.sh@20 -- # killprocess 843760 01:01:12.472 05:56:06 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 843760 ']' 01:01:12.472 05:56:06 keyring_file -- common/autotest_common.sh@958 -- # kill -0 843760 01:01:12.472 05:56:06 keyring_file -- common/autotest_common.sh@959 -- # uname 01:01:12.472 05:56:06 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:01:12.472 05:56:06 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 843760 01:01:12.472 05:56:06 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:01:12.472 05:56:06 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:01:12.472 05:56:06 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 843760' 01:01:12.472 killing process with pid 843760 01:01:12.473 05:56:06 keyring_file -- common/autotest_common.sh@973 -- # kill 843760 01:01:12.473 Received shutdown signal, test time was about 1.000000 seconds 01:01:12.473 01:01:12.473 Latency(us) 01:01:12.473 [2024-12-09T04:56:06.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:01:12.473 [2024-12-09T04:56:06.698Z] =================================================================================================================== 01:01:12.473 [2024-12-09T04:56:06.698Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:01:12.473 05:56:06 keyring_file -- common/autotest_common.sh@978 -- # wait 843760 01:01:12.729 05:56:06 keyring_file -- keyring/file.sh@21 -- # killprocess 842156 01:01:12.730 05:56:06 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 842156 ']' 01:01:12.730 05:56:06 keyring_file -- common/autotest_common.sh@958 -- # kill -0 842156 01:01:12.730 05:56:06 keyring_file -- common/autotest_common.sh@959 -- # uname 01:01:12.730 05:56:06 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:01:12.730 05:56:06 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 842156 01:01:12.730 05:56:06 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:01:12.730 05:56:06 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:01:12.730 05:56:06 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 842156' 01:01:12.730 killing process with pid 842156 01:01:12.730 05:56:06 keyring_file -- common/autotest_common.sh@973 -- # kill 842156 01:01:12.730 05:56:06 keyring_file -- common/autotest_common.sh@978 -- # wait 842156 01:01:13.294 01:01:13.294 real 0m14.700s 01:01:13.294 user 0m37.161s 01:01:13.294 sys 0m3.299s 01:01:13.294 05:56:07 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:13.294 05:56:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:01:13.294 ************************************ 01:01:13.294 END TEST keyring_file 01:01:13.294 ************************************ 01:01:13.294 05:56:07 -- spdk/autotest.sh@293 -- # [[ y == y ]] 01:01:13.294 05:56:07 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 01:01:13.294 05:56:07 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:01:13.294 05:56:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:13.294 05:56:07 -- 
common/autotest_common.sh@10 -- # set +x 01:01:13.294 ************************************ 01:01:13.294 START TEST keyring_linux 01:01:13.294 ************************************ 01:01:13.294 05:56:07 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 01:01:13.294 Joined session keyring: 39201512 01:01:13.294 * Looking for test storage... 01:01:13.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 01:01:13.294 05:56:07 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 01:01:13.294 05:56:07 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 01:01:13.294 05:56:07 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 01:01:13.294 05:56:07 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@345 -- # : 1 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@365 -- # decimal 1 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@353 -- # local d=1 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@355 -- # echo 1 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@366 -- # decimal 2 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@353 -- # local d=2 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@355 -- # echo 2 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:01:13.294 05:56:07 keyring_linux -- scripts/common.sh@368 -- # return 0 01:01:13.294 05:56:07 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:01:13.294 05:56:07 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 01:01:13.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:13.294 --rc genhtml_branch_coverage=1 01:01:13.294 --rc genhtml_function_coverage=1 01:01:13.294 --rc genhtml_legend=1 01:01:13.294 --rc geninfo_all_blocks=1 01:01:13.294 --rc geninfo_unexecuted_blocks=1 01:01:13.294 01:01:13.294 ' 01:01:13.294 05:56:07 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 01:01:13.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:13.294 --rc genhtml_branch_coverage=1 01:01:13.294 --rc genhtml_function_coverage=1 01:01:13.294 --rc genhtml_legend=1 01:01:13.294 --rc geninfo_all_blocks=1 01:01:13.294 --rc geninfo_unexecuted_blocks=1 01:01:13.294 01:01:13.294 ' 01:01:13.294 05:56:07 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 01:01:13.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:13.294 --rc genhtml_branch_coverage=1 01:01:13.294 --rc genhtml_function_coverage=1 01:01:13.294 --rc genhtml_legend=1 01:01:13.294 --rc geninfo_all_blocks=1 01:01:13.294 --rc geninfo_unexecuted_blocks=1 01:01:13.294 01:01:13.294 ' 01:01:13.294 05:56:07 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 01:01:13.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:13.294 --rc genhtml_branch_coverage=1 01:01:13.294 --rc genhtml_function_coverage=1 01:01:13.294 --rc genhtml_legend=1 01:01:13.294 --rc geninfo_all_blocks=1 01:01:13.294 --rc geninfo_unexecuted_blocks=1 01:01:13.294 01:01:13.294 ' 01:01:13.552 05:56:07 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 01:01:13.552 05:56:07 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@7 -- # uname -s 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:01:13.552 05:56:07 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 01:01:13.552 05:56:07 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:01:13.552 05:56:07 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:01:13.552 05:56:07 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:01:13.552 05:56:07 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:13.552 05:56:07 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:13.552 05:56:07 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:13.552 05:56:07 keyring_linux -- paths/export.sh@5 -- # export PATH 01:01:13.552 05:56:07 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
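Before the keyring_linux flow continues below, it helps to know that the keyring/common.sh helpers seen throughout this trace (bperf_cmd, get_key, get_refcnt) are thin wrappers around rpc.py and jq. A rough equivalent of what the xtrace shows them doing, assuming $rootdir points at the spdk checkout in this workspace, is:

    # bperf_cmd: forward an RPC to the bdevperf instance over its UNIX socket
    bperf_cmd() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"
    }

    # get_key: pick a single key object out of keyring_get_keys by name
    get_key() {
        bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"
    }

    # get_refcnt: reference count of a key, driving the (( 1 == 1 )) style asserts
    get_refcnt() {
        get_key "$1" | jq -r .refcnt
    }

This is only a sketch reconstructed from the trace, not a verbatim copy of common.sh.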
01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@51 -- # : 0 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:01:13.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 01:01:13.552 05:56:07 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 01:01:13.552 05:56:07 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 01:01:13.552 05:56:07 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 01:01:13.552 05:56:07 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 01:01:13.552 05:56:07 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 01:01:13.552 05:56:07 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 01:01:13.552 05:56:07 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 01:01:13.552 05:56:07 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 01:01:13.552 05:56:07 keyring_linux -- keyring/common.sh@17 -- # name=key0 01:01:13.552 05:56:07 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:01:13.552 05:56:07 keyring_linux -- keyring/common.sh@17 -- # digest=0 01:01:13.552 05:56:07 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 01:01:13.552 05:56:07 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:01:13.552 05:56:07 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 01:01:13.553 05:56:07 keyring_linux -- nvmf/common.sh@732 -- # digest=0 01:01:13.553 05:56:07 keyring_linux -- nvmf/common.sh@733 -- # python - 01:01:13.553 05:56:07 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 01:01:13.553 05:56:07 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 01:01:13.553 /tmp/:spdk-test:key0 01:01:13.553 05:56:07 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 01:01:13.553 05:56:07 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 01:01:13.553 05:56:07 keyring_linux -- keyring/common.sh@17 -- # name=key1 01:01:13.553 05:56:07 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 01:01:13.553 05:56:07 keyring_linux -- keyring/common.sh@17 -- # digest=0 01:01:13.553 05:56:07 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 01:01:13.553 
05:56:07 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 01:01:13.553 05:56:07 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 01:01:13.553 05:56:07 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 01:01:13.553 05:56:07 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:01:13.553 05:56:07 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 01:01:13.553 05:56:07 keyring_linux -- nvmf/common.sh@732 -- # digest=0 01:01:13.553 05:56:07 keyring_linux -- nvmf/common.sh@733 -- # python - 01:01:13.553 05:56:07 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 01:01:13.553 05:56:07 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 01:01:13.553 /tmp/:spdk-test:key1 01:01:13.553 05:56:07 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=844122 01:01:13.553 05:56:07 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 01:01:13.553 05:56:07 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 844122 01:01:13.553 05:56:07 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 844122 ']' 01:01:13.553 05:56:07 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:01:13.553 05:56:07 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 01:01:13.553 05:56:07 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:01:13.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:01:13.553 05:56:07 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 01:01:13.553 05:56:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:01:13.553 [2024-12-09 05:56:07.682403] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
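The prep_key calls traced above turn a raw hex key into a TLS PSK interchange file under /tmp. The base64/CRC encoding is done by a small inline python helper (format_key in nvmf/common.sh) whose body is not visible in the trace, so the sketch below simply reuses the interchange string this run produced for key0; how common.sh gets the string into the file is likewise not shown by xtrace, and the write here is purely illustrative:

    # hex key, digest and target path as set by keyring/linux.sh for key0
    key=00112233445566778899aabbccddeeff
    digest=0
    path=/tmp/:spdk-test:key0

    # interchange-format PSK as produced by format_interchange_psk in this run
    psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'

    # write the PSK out and lock the permissions down, as prep_key does
    printf '%s' "$psk" > "$path"
    chmod 0600 "$path"

key1 (112233445566778899aabbccddeeff00) gets the same treatment at /tmp/:spdk-test:key1.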
01:01:13.553 [2024-12-09 05:56:07.682511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid844122 ] 01:01:13.553 [2024-12-09 05:56:07.748670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:13.810 [2024-12-09 05:56:07.809684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:14.066 05:56:08 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:01:14.066 05:56:08 keyring_linux -- common/autotest_common.sh@868 -- # return 0 01:01:14.066 05:56:08 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 01:01:14.066 05:56:08 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:14.067 05:56:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:01:14.067 [2024-12-09 05:56:08.086733] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:01:14.067 null0 01:01:14.067 [2024-12-09 05:56:08.118770] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:01:14.067 [2024-12-09 05:56:08.119241] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:01:14.067 05:56:08 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:14.067 05:56:08 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 01:01:14.067 1037177441 01:01:14.067 05:56:08 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 01:01:14.067 921316304 01:01:14.067 05:56:08 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=844164 01:01:14.067 05:56:08 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 844164 /var/tmp/bperf.sock 01:01:14.067 05:56:08 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 01:01:14.067 05:56:08 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 844164 ']' 01:01:14.067 05:56:08 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:01:14.067 05:56:08 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 01:01:14.067 05:56:08 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:01:14.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:01:14.067 05:56:08 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 01:01:14.067 05:56:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:01:14.067 [2024-12-09 05:56:08.193460] Starting SPDK v25.01-pre git sha1 66902d69a / DPDK 24.03.0 initialization... 
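With the target up, keyring_linux mirrors those PSKs into the kernel session keyring so that bdevperf can later resolve them by name once the linux keyring is enabled via keyring_linux_set_options --enable (seen a little further below). The keyctl calls used in this run look like the following; the serial numbers echoed back (1037177441 and 921316304 here) will differ between runs:

    # add both interchange PSKs to the session keyring; keyctl echoes the key serial
    keyctl add user :spdk-test:key0 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' @s
    keyctl add user :spdk-test:key1 'NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:' @s

    # the later check_keys/cleanup steps resolve a name back to its serial,
    # dump the payload, and finally unlink it again
    sn=$(keyctl search @s user :spdk-test:key0)
    keyctl print "$sn"
    keyctl unlink "$sn"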
01:01:14.067 [2024-12-09 05:56:08.193539] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid844164 ] 01:01:14.067 [2024-12-09 05:56:08.263496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:14.323 [2024-12-09 05:56:08.322307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:01:14.323 05:56:08 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:01:14.323 05:56:08 keyring_linux -- common/autotest_common.sh@868 -- # return 0 01:01:14.323 05:56:08 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 01:01:14.323 05:56:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 01:01:14.581 05:56:08 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 01:01:14.581 05:56:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:01:14.838 05:56:09 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 01:01:14.838 05:56:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 01:01:15.094 [2024-12-09 05:56:09.291612] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:01:15.351 nvme0n1 01:01:15.351 05:56:09 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 01:01:15.351 05:56:09 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 01:01:15.351 05:56:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 01:01:15.351 05:56:09 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 01:01:15.351 05:56:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 01:01:15.351 05:56:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:01:15.608 05:56:09 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 01:01:15.608 05:56:09 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 01:01:15.608 05:56:09 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 01:01:15.608 05:56:09 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 01:01:15.608 05:56:09 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:01:15.608 05:56:09 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 01:01:15.608 05:56:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:01:15.866 05:56:09 keyring_linux -- keyring/linux.sh@25 -- # sn=1037177441 01:01:15.866 05:56:09 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 01:01:15.866 05:56:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 01:01:15.866 05:56:09 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 1037177441 == \1\0\3\7\1\7\7\4\4\1 ]] 01:01:15.866 05:56:09 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1037177441 01:01:15.866 05:56:09 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 01:01:15.866 05:56:09 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:01:15.866 Running I/O for 1 seconds... 01:01:17.238 9491.00 IOPS, 37.07 MiB/s 01:01:17.238 Latency(us) 01:01:17.238 [2024-12-09T04:56:11.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:01:17.238 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 01:01:17.238 nvme0n1 : 1.01 9494.90 37.09 0.00 0.00 13387.49 6796.33 18835.53 01:01:17.238 [2024-12-09T04:56:11.463Z] =================================================================================================================== 01:01:17.238 [2024-12-09T04:56:11.463Z] Total : 9494.90 37.09 0.00 0.00 13387.49 6796.33 18835.53 01:01:17.238 { 01:01:17.238 "results": [ 01:01:17.238 { 01:01:17.238 "job": "nvme0n1", 01:01:17.238 "core_mask": "0x2", 01:01:17.238 "workload": "randread", 01:01:17.238 "status": "finished", 01:01:17.238 "queue_depth": 128, 01:01:17.238 "io_size": 4096, 01:01:17.238 "runtime": 1.013175, 01:01:17.238 "iops": 9494.904631480247, 01:01:17.238 "mibps": 37.089471216719716, 01:01:17.238 "io_failed": 0, 01:01:17.238 "io_timeout": 0, 01:01:17.238 "avg_latency_us": 13387.492935396936, 01:01:17.238 "min_latency_us": 6796.325925925926, 01:01:17.238 "max_latency_us": 18835.53185185185 01:01:17.238 } 01:01:17.238 ], 01:01:17.238 "core_count": 1 01:01:17.238 } 01:01:17.238 05:56:11 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:01:17.238 05:56:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:01:17.238 05:56:11 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 01:01:17.238 05:56:11 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 01:01:17.238 05:56:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 01:01:17.238 05:56:11 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 01:01:17.238 05:56:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:01:17.238 05:56:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 01:01:17.496 05:56:11 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 01:01:17.496 05:56:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 01:01:17.496 05:56:11 keyring_linux -- keyring/linux.sh@23 -- # return 01:01:17.496 05:56:11 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:01:17.496 05:56:11 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 01:01:17.496 05:56:11 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 01:01:17.496 05:56:11 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 01:01:17.496 05:56:11 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:17.496 05:56:11 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:01:17.496 05:56:11 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:17.496 05:56:11 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:01:17.496 05:56:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:01:17.754 [2024-12-09 05:56:11.872694] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:01:17.754 [2024-12-09 05:56:11.873005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1799b40 (107): Transport endpoint is not connected 01:01:17.754 [2024-12-09 05:56:11.873998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1799b40 (9): Bad file descriptor 01:01:17.754 [2024-12-09 05:56:11.874997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 01:01:17.754 [2024-12-09 05:56:11.875015] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 01:01:17.754 [2024-12-09 05:56:11.875028] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 01:01:17.754 [2024-12-09 05:56:11.875042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
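The *ERROR* lines above come from linux.sh@84, which deliberately re-attaches with :spdk-test:key1 and, through the NOT wrapper, expects the call to fail; the request/response pair just below is the JSON-RPC view of that failure. For reference, the raw invocation (expected to exit non-zero in this configuration) is roughly:

    # expected to fail in this test: the attach names :spdk-test:key1 from the kernel keyring
    if scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
           -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
           -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1; then
        echo "attach with :spdk-test:key1 unexpectedly succeeded" >&2
        exit 1
    fi

The code -5 / Input/output error in the response below is exactly the outcome the test treats as a pass before it unlinks both keys and shuts the bdevperf and spdk_tgt processes down.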
01:01:17.754 request: 01:01:17.754 { 01:01:17.754 "name": "nvme0", 01:01:17.754 "trtype": "tcp", 01:01:17.754 "traddr": "127.0.0.1", 01:01:17.754 "adrfam": "ipv4", 01:01:17.754 "trsvcid": "4420", 01:01:17.754 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:01:17.754 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:01:17.754 "prchk_reftag": false, 01:01:17.754 "prchk_guard": false, 01:01:17.754 "hdgst": false, 01:01:17.754 "ddgst": false, 01:01:17.754 "psk": ":spdk-test:key1", 01:01:17.754 "allow_unrecognized_csi": false, 01:01:17.754 "method": "bdev_nvme_attach_controller", 01:01:17.754 "req_id": 1 01:01:17.754 } 01:01:17.754 Got JSON-RPC error response 01:01:17.754 response: 01:01:17.754 { 01:01:17.754 "code": -5, 01:01:17.754 "message": "Input/output error" 01:01:17.754 } 01:01:17.754 05:56:11 keyring_linux -- common/autotest_common.sh@655 -- # es=1 01:01:17.754 05:56:11 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:17.754 05:56:11 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:01:17.754 05:56:11 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:17.754 05:56:11 keyring_linux -- keyring/linux.sh@1 -- # cleanup 01:01:17.754 05:56:11 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 01:01:17.754 05:56:11 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 01:01:17.754 05:56:11 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 01:01:17.754 05:56:11 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 01:01:17.754 05:56:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 01:01:17.754 05:56:11 keyring_linux -- keyring/linux.sh@33 -- # sn=1037177441 01:01:17.754 05:56:11 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1037177441 01:01:17.754 1 links removed 01:01:17.754 05:56:11 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 01:01:17.754 05:56:11 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 01:01:17.754 05:56:11 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 01:01:17.754 05:56:11 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 01:01:17.754 05:56:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 01:01:17.754 05:56:11 keyring_linux -- keyring/linux.sh@33 -- # sn=921316304 01:01:17.754 05:56:11 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 921316304 01:01:17.754 1 links removed 01:01:17.754 05:56:11 keyring_linux -- keyring/linux.sh@41 -- # killprocess 844164 01:01:17.754 05:56:11 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 844164 ']' 01:01:17.754 05:56:11 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 844164 01:01:17.754 05:56:11 keyring_linux -- common/autotest_common.sh@959 -- # uname 01:01:17.754 05:56:11 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:01:17.754 05:56:11 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 844164 01:01:17.754 05:56:11 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:01:17.754 05:56:11 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:01:17.754 05:56:11 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 844164' 01:01:17.754 killing process with pid 844164 01:01:17.754 05:56:11 keyring_linux -- common/autotest_common.sh@973 -- # kill 844164 01:01:17.754 Received shutdown signal, test time was about 1.000000 seconds 01:01:17.754 01:01:17.754 
Latency(us) 01:01:17.754 [2024-12-09T04:56:11.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:01:17.754 [2024-12-09T04:56:11.979Z] =================================================================================================================== 01:01:17.754 [2024-12-09T04:56:11.979Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:01:17.754 05:56:11 keyring_linux -- common/autotest_common.sh@978 -- # wait 844164 01:01:18.011 05:56:12 keyring_linux -- keyring/linux.sh@42 -- # killprocess 844122 01:01:18.011 05:56:12 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 844122 ']' 01:01:18.011 05:56:12 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 844122 01:01:18.011 05:56:12 keyring_linux -- common/autotest_common.sh@959 -- # uname 01:01:18.011 05:56:12 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:01:18.011 05:56:12 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 844122 01:01:18.269 05:56:12 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:01:18.269 05:56:12 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:01:18.269 05:56:12 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 844122' 01:01:18.269 killing process with pid 844122 01:01:18.269 05:56:12 keyring_linux -- common/autotest_common.sh@973 -- # kill 844122 01:01:18.269 05:56:12 keyring_linux -- common/autotest_common.sh@978 -- # wait 844122 01:01:18.527 01:01:18.527 real 0m5.347s 01:01:18.527 user 0m10.467s 01:01:18.527 sys 0m1.676s 01:01:18.527 05:56:12 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:18.527 05:56:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:01:18.527 ************************************ 01:01:18.527 END TEST keyring_linux 01:01:18.527 ************************************ 01:01:18.527 05:56:12 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 01:01:18.527 05:56:12 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 01:01:18.527 05:56:12 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 01:01:18.527 05:56:12 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 01:01:18.527 05:56:12 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 01:01:18.527 05:56:12 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 01:01:18.527 05:56:12 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 01:01:18.527 05:56:12 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 01:01:18.527 05:56:12 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 01:01:18.527 05:56:12 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 01:01:18.527 05:56:12 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 01:01:18.527 05:56:12 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 01:01:18.527 05:56:12 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 01:01:18.527 05:56:12 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 01:01:18.527 05:56:12 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 01:01:18.527 05:56:12 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 01:01:18.527 05:56:12 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 01:01:18.527 05:56:12 -- common/autotest_common.sh@726 -- # xtrace_disable 01:01:18.527 05:56:12 -- common/autotest_common.sh@10 -- # set +x 01:01:18.527 05:56:12 -- spdk/autotest.sh@388 -- # autotest_cleanup 01:01:18.527 05:56:12 -- common/autotest_common.sh@1396 -- # local autotest_es=0 01:01:18.527 05:56:12 -- common/autotest_common.sh@1397 -- # xtrace_disable 01:01:18.527 05:56:12 -- common/autotest_common.sh@10 -- # set +x 01:01:21.072 INFO: APP EXITING 01:01:21.072 INFO: 
killing all VMs 01:01:21.072 INFO: killing vhost app 01:01:21.072 INFO: EXIT DONE 01:01:21.636 0000:88:00.0 (8086 0a54): Already using the nvme driver 01:01:21.636 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 01:01:21.636 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 01:01:21.636 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 01:01:21.893 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 01:01:21.893 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 01:01:21.893 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 01:01:21.893 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 01:01:21.893 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 01:01:21.893 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 01:01:21.893 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 01:01:21.893 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 01:01:21.893 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 01:01:21.893 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 01:01:21.893 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 01:01:21.893 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 01:01:21.893 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 01:01:23.265 Cleaning 01:01:23.265 Removing: /var/run/dpdk/spdk0/config 01:01:23.265 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 01:01:23.265 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 01:01:23.265 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 01:01:23.265 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 01:01:23.265 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 01:01:23.265 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 01:01:23.265 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 01:01:23.265 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 01:01:23.265 Removing: /var/run/dpdk/spdk0/fbarray_memzone 01:01:23.265 Removing: /var/run/dpdk/spdk0/hugepage_info 01:01:23.265 Removing: /var/run/dpdk/spdk1/config 01:01:23.265 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 01:01:23.265 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 01:01:23.265 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 01:01:23.265 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 01:01:23.265 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 01:01:23.265 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 01:01:23.265 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 01:01:23.265 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 01:01:23.265 Removing: /var/run/dpdk/spdk1/fbarray_memzone 01:01:23.265 Removing: /var/run/dpdk/spdk1/hugepage_info 01:01:23.265 Removing: /var/run/dpdk/spdk2/config 01:01:23.265 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 01:01:23.265 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 01:01:23.265 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 01:01:23.265 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 01:01:23.265 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 01:01:23.265 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 01:01:23.265 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 01:01:23.265 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 01:01:23.265 Removing: /var/run/dpdk/spdk2/fbarray_memzone 01:01:23.265 Removing: /var/run/dpdk/spdk2/hugepage_info 01:01:23.265 Removing: /var/run/dpdk/spdk3/config 01:01:23.265 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 01:01:23.265 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 01:01:23.265 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 01:01:23.265 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 01:01:23.265 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 01:01:23.265 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 01:01:23.265 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 01:01:23.265 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 01:01:23.265 Removing: /var/run/dpdk/spdk3/fbarray_memzone 01:01:23.265 Removing: /var/run/dpdk/spdk3/hugepage_info 01:01:23.265 Removing: /var/run/dpdk/spdk4/config 01:01:23.265 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 01:01:23.265 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 01:01:23.265 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 01:01:23.265 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 01:01:23.265 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 01:01:23.265 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 01:01:23.265 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 01:01:23.265 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 01:01:23.265 Removing: /var/run/dpdk/spdk4/fbarray_memzone 01:01:23.265 Removing: /var/run/dpdk/spdk4/hugepage_info 01:01:23.265 Removing: /dev/shm/bdev_svc_trace.1 01:01:23.265 Removing: /dev/shm/nvmf_trace.0 01:01:23.265 Removing: /dev/shm/spdk_tgt_trace.pid521817 01:01:23.265 Removing: /var/run/dpdk/spdk0 01:01:23.265 Removing: /var/run/dpdk/spdk1 01:01:23.265 Removing: /var/run/dpdk/spdk2 01:01:23.265 Removing: /var/run/dpdk/spdk3 01:01:23.265 Removing: /var/run/dpdk/spdk4 01:01:23.265 Removing: /var/run/dpdk/spdk_pid520024 01:01:23.265 Removing: /var/run/dpdk/spdk_pid520760 01:01:23.265 Removing: /var/run/dpdk/spdk_pid521817 01:01:23.265 Removing: /var/run/dpdk/spdk_pid522296 01:01:23.265 Removing: /var/run/dpdk/spdk_pid523339 01:01:23.265 Removing: /var/run/dpdk/spdk_pid523481 01:01:23.265 Removing: /var/run/dpdk/spdk_pid524197 01:01:23.265 Removing: /var/run/dpdk/spdk_pid524320 01:01:23.265 Removing: /var/run/dpdk/spdk_pid524586 01:01:23.265 Removing: /var/run/dpdk/spdk_pid525795 01:01:23.265 Removing: /var/run/dpdk/spdk_pid526829 01:01:23.265 Removing: /var/run/dpdk/spdk_pid527030 01:01:23.265 Removing: /var/run/dpdk/spdk_pid527230 01:01:23.265 Removing: /var/run/dpdk/spdk_pid527560 01:01:23.265 Removing: /var/run/dpdk/spdk_pid527762 01:01:23.265 Removing: /var/run/dpdk/spdk_pid527919 01:01:23.265 Removing: /var/run/dpdk/spdk_pid528075 01:01:23.265 Removing: /var/run/dpdk/spdk_pid528275 01:01:23.265 Removing: /var/run/dpdk/spdk_pid528579 01:01:23.265 Removing: /var/run/dpdk/spdk_pid531070 01:01:23.265 Removing: /var/run/dpdk/spdk_pid531239 01:01:23.265 Removing: /var/run/dpdk/spdk_pid531402 01:01:23.265 Removing: /var/run/dpdk/spdk_pid531501 01:01:23.265 Removing: /var/run/dpdk/spdk_pid531836 01:01:23.265 Removing: /var/run/dpdk/spdk_pid531926 01:01:23.265 Removing: /var/run/dpdk/spdk_pid532270 01:01:23.265 Removing: /var/run/dpdk/spdk_pid532403 01:01:23.265 Removing: /var/run/dpdk/spdk_pid532574 01:01:23.265 Removing: /var/run/dpdk/spdk_pid532583 01:01:23.265 Removing: /var/run/dpdk/spdk_pid532866 01:01:23.265 Removing: /var/run/dpdk/spdk_pid532877 01:01:23.265 Removing: /var/run/dpdk/spdk_pid533379 01:01:23.265 Removing: /var/run/dpdk/spdk_pid533534 01:01:23.265 Removing: /var/run/dpdk/spdk_pid533733 01:01:23.265 Removing: /var/run/dpdk/spdk_pid535973 01:01:23.524 
01:01:23.524 Removing: /var/run/dpdk/spdk_pid538594
01:01:23.524 Removing: /var/run/dpdk/spdk_pid545624
01:01:23.524 Removing: /var/run/dpdk/spdk_pid546152
01:01:23.524 Removing: /var/run/dpdk/spdk_pid548572
01:01:23.524 Removing: /var/run/dpdk/spdk_pid548833
01:01:23.524 Removing: /var/run/dpdk/spdk_pid551482
01:01:23.524 Removing: /var/run/dpdk/spdk_pid555826
01:01:23.524 Removing: /var/run/dpdk/spdk_pid558021
01:01:23.524 Removing: /var/run/dpdk/spdk_pid564454
01:01:23.524 Removing: /var/run/dpdk/spdk_pid569800
01:01:23.524 Removing: /var/run/dpdk/spdk_pid571014
01:01:23.524 Removing: /var/run/dpdk/spdk_pid571688
01:01:23.524 Removing: /var/run/dpdk/spdk_pid582074
01:01:23.524 Removing: /var/run/dpdk/spdk_pid584491
01:01:23.524 Removing: /var/run/dpdk/spdk_pid611876
01:01:23.524 Removing: /var/run/dpdk/spdk_pid615146
01:01:23.524 Removing: /var/run/dpdk/spdk_pid618972
01:01:23.524 Removing: /var/run/dpdk/spdk_pid623246
01:01:23.524 Removing: /var/run/dpdk/spdk_pid623334
01:01:23.524 Removing: /var/run/dpdk/spdk_pid623911
01:01:23.524 Removing: /var/run/dpdk/spdk_pid624561
01:01:23.524 Removing: /var/run/dpdk/spdk_pid625220
01:01:23.524 Removing: /var/run/dpdk/spdk_pid625639
01:01:23.524 Removing: /var/run/dpdk/spdk_pid625646
01:01:23.524 Removing: /var/run/dpdk/spdk_pid625904
01:01:23.524 Removing: /var/run/dpdk/spdk_pid625925
01:01:23.524 Removing: /var/run/dpdk/spdk_pid626043
01:01:23.524 Removing: /var/run/dpdk/spdk_pid626768
01:01:23.524 Removing: /var/run/dpdk/spdk_pid627474
01:01:23.524 Removing: /var/run/dpdk/spdk_pid628519
01:01:23.524 Removing: /var/run/dpdk/spdk_pid628969
01:01:23.524 Removing: /var/run/dpdk/spdk_pid629047
01:01:23.524 Removing: /var/run/dpdk/spdk_pid629193
01:01:23.524 Removing: /var/run/dpdk/spdk_pid630202
01:01:23.524 Removing: /var/run/dpdk/spdk_pid630944
01:01:23.524 Removing: /var/run/dpdk/spdk_pid636278
01:01:23.524 Removing: /var/run/dpdk/spdk_pid664426
01:01:23.524 Removing: /var/run/dpdk/spdk_pid667355
01:01:23.524 Removing: /var/run/dpdk/spdk_pid668543
01:01:23.524 Removing: /var/run/dpdk/spdk_pid669873
01:01:23.524 Removing: /var/run/dpdk/spdk_pid670014
01:01:23.524 Removing: /var/run/dpdk/spdk_pid670155
01:01:23.524 Removing: /var/run/dpdk/spdk_pid670269
01:01:23.524 Removing: /var/run/dpdk/spdk_pid670745
01:01:23.524 Removing: /var/run/dpdk/spdk_pid672062
01:01:23.524 Removing: /var/run/dpdk/spdk_pid672799
01:01:23.524 Removing: /var/run/dpdk/spdk_pid673230
01:01:23.524 Removing: /var/run/dpdk/spdk_pid674853
01:01:23.524 Removing: /var/run/dpdk/spdk_pid675273
01:01:23.524 Removing: /var/run/dpdk/spdk_pid675835
01:01:23.524 Removing: /var/run/dpdk/spdk_pid678491
01:01:23.524 Removing: /var/run/dpdk/spdk_pid682128
01:01:23.524 Removing: /var/run/dpdk/spdk_pid682129
01:01:23.524 Removing: /var/run/dpdk/spdk_pid682130
01:01:23.524 Removing: /var/run/dpdk/spdk_pid684356
01:01:23.524 Removing: /var/run/dpdk/spdk_pid689209
01:01:23.524 Removing: /var/run/dpdk/spdk_pid691988
01:01:23.524 Removing: /var/run/dpdk/spdk_pid695885
01:01:23.524 Removing: /var/run/dpdk/spdk_pid696841
01:01:23.524 Removing: /var/run/dpdk/spdk_pid697936
01:01:23.524 Removing: /var/run/dpdk/spdk_pid699021
01:01:23.524 Removing: /var/run/dpdk/spdk_pid701790
01:01:23.524 Removing: /var/run/dpdk/spdk_pid704372
01:01:23.524 Removing: /var/run/dpdk/spdk_pid706741
01:01:23.524 Removing: /var/run/dpdk/spdk_pid710981
01:01:23.524 Removing: /var/run/dpdk/spdk_pid711046
01:01:23.524 Removing: /var/run/dpdk/spdk_pid713888
01:01:23.524 Removing: /var/run/dpdk/spdk_pid714145
01:01:23.524 Removing: /var/run/dpdk/spdk_pid714277
01:01:23.524 Removing: /var/run/dpdk/spdk_pid714553
01:01:23.524 Removing: /var/run/dpdk/spdk_pid714558
01:01:23.524 Removing: /var/run/dpdk/spdk_pid717555
01:01:23.524 Removing: /var/run/dpdk/spdk_pid717890
01:01:23.524 Removing: /var/run/dpdk/spdk_pid721067
01:01:23.524 Removing: /var/run/dpdk/spdk_pid723052
01:01:23.524 Removing: /var/run/dpdk/spdk_pid726480
01:01:23.524 Removing: /var/run/dpdk/spdk_pid729821
01:01:23.524 Removing: /var/run/dpdk/spdk_pid736316
01:01:23.524 Removing: /var/run/dpdk/spdk_pid740653
01:01:23.524 Removing: /var/run/dpdk/spdk_pid740657
01:01:23.524 Removing: /var/run/dpdk/spdk_pid752999
01:01:23.524 Removing: /var/run/dpdk/spdk_pid753425
01:01:23.524 Removing: /var/run/dpdk/spdk_pid754443
01:01:23.524 Removing: /var/run/dpdk/spdk_pid754852
01:01:23.524 Removing: /var/run/dpdk/spdk_pid755435
01:01:23.524 Removing: /var/run/dpdk/spdk_pid755959
01:01:23.524 Removing: /var/run/dpdk/spdk_pid756370
01:01:23.524 Removing: /var/run/dpdk/spdk_pid756780
01:01:23.524 Removing: /var/run/dpdk/spdk_pid759291
01:01:23.524 Removing: /var/run/dpdk/spdk_pid759545
01:01:23.524 Removing: /var/run/dpdk/spdk_pid763362
01:01:23.524 Removing: /var/run/dpdk/spdk_pid763413
01:01:23.524 Removing: /var/run/dpdk/spdk_pid766782
01:01:23.524 Removing: /var/run/dpdk/spdk_pid769399
01:01:23.524 Removing: /var/run/dpdk/spdk_pid776318
01:01:23.524 Removing: /var/run/dpdk/spdk_pid776830
01:01:23.524 Removing: /var/run/dpdk/spdk_pid779219
01:01:23.524 Removing: /var/run/dpdk/spdk_pid779500
01:01:23.524 Removing: /var/run/dpdk/spdk_pid782123
01:01:23.524 Removing: /var/run/dpdk/spdk_pid785836
01:01:23.524 Removing: /var/run/dpdk/spdk_pid788613
01:01:23.524 Removing: /var/run/dpdk/spdk_pid794995
01:01:23.524 Removing: /var/run/dpdk/spdk_pid800199
01:01:23.524 Removing: /var/run/dpdk/spdk_pid801397
01:01:23.524 Removing: /var/run/dpdk/spdk_pid802155
01:01:23.524 Removing: /var/run/dpdk/spdk_pid812344
01:01:23.524 Removing: /var/run/dpdk/spdk_pid814587
01:01:23.524 Removing: /var/run/dpdk/spdk_pid816598
01:01:23.524 Removing: /var/run/dpdk/spdk_pid821743
01:01:23.524 Removing: /var/run/dpdk/spdk_pid821763
01:01:23.524 Removing: /var/run/dpdk/spdk_pid825185
01:01:23.524 Removing: /var/run/dpdk/spdk_pid826587
01:01:23.524 Removing: /var/run/dpdk/spdk_pid828058
01:01:23.524 Removing: /var/run/dpdk/spdk_pid828852
01:01:23.524 Removing: /var/run/dpdk/spdk_pid830373
01:01:23.524 Removing: /var/run/dpdk/spdk_pid831252
01:01:23.524 Removing: /var/run/dpdk/spdk_pid836565
01:01:23.524 Removing: /var/run/dpdk/spdk_pid836937
01:01:23.524 Removing: /var/run/dpdk/spdk_pid837326
01:01:23.525 Removing: /var/run/dpdk/spdk_pid838890
01:01:23.525 Removing: /var/run/dpdk/spdk_pid839288
01:01:23.525 Removing: /var/run/dpdk/spdk_pid839685
01:01:23.525 Removing: /var/run/dpdk/spdk_pid842156
01:01:23.525 Removing: /var/run/dpdk/spdk_pid842168
01:01:23.525 Removing: /var/run/dpdk/spdk_pid843760
01:01:23.525 Removing: /var/run/dpdk/spdk_pid844122
01:01:23.525 Removing: /var/run/dpdk/spdk_pid844164
01:01:23.525 Clean
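The "Cleaning" block above is the autotest teardown removing leftover DPDK runtime state: the per-process directories under /var/run/dpdk/spdkN (config, fbarray_* memory segments, hugepage_info), the trace files under /dev/shm, and stale spdk_pid* files. A minimal bash sketch of an equivalent cleanup, using only the paths that appear in the log above (a reconstruction for illustration, not the actual SPDK autotest helper):

    #!/usr/bin/env bash
    # Remove per-process DPDK runtime directories (config, fbarray_*, hugepage_info).
    for d in /var/run/dpdk/spdk[0-9]*; do
        [ -e "$d" ] || continue
        echo "Removing: $d"
        sudo rm -rf "$d"
    done
    # Remove stale pid files and the shared-memory trace files named in the log.
    sudo rm -f /var/run/dpdk/spdk_pid*
    sudo rm -f /dev/shm/bdev_svc_trace.* /dev/shm/nvmf_trace.* /dev/shm/spdk_tgt_trace.pid*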
01:01:23.783 05:56:17 -- common/autotest_common.sh@1453 -- # return 0
01:01:23.783 05:56:17 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
01:01:23.783 05:56:17 -- common/autotest_common.sh@732 -- # xtrace_disable
01:01:23.783 05:56:17 -- common/autotest_common.sh@10 -- # set +x
01:01:23.783 05:56:17 -- spdk/autotest.sh@391 -- # timing_exit autotest
01:01:23.783 05:56:17 -- common/autotest_common.sh@732 -- # xtrace_disable
01:01:23.783 05:56:17 -- common/autotest_common.sh@10 -- # set +x
01:01:23.783 05:56:17 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
01:01:23.783 05:56:17 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
01:01:23.783 05:56:17 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
01:01:23.783 05:56:17 -- spdk/autotest.sh@396 -- # [[ y == y ]]
01:01:23.783 05:56:17 -- spdk/autotest.sh@398 -- # hostname
01:01:23.783 05:56:17 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
01:01:24.041 geninfo: WARNING: invalid characters removed from testname!
01:01:56.112 05:56:48 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
01:01:58.654 05:56:52 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
01:02:01.950 05:56:55 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
01:02:05.241 05:56:58 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
01:02:07.786 05:57:01 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
01:02:11.081 05:57:04 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
01:02:14.370 05:57:08 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
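The sequence above is the coverage post-processing step: lcov -c captures the counters gathered during the run into cov_test.info, -a merges that with the pre-test baseline into cov_total.info, and the repeated -r calls strip DPDK, system, and example/tool sources from the total before the intermediate files are deleted. A condensed bash sketch of the same flow (the $OUT variable and the shortened option list are simplifications for readability; the full --rc switches and the --ignore-errors on the '/usr/*' pass are the ones shown in the log):

    OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"
    # Capture coverage gathered during the test run.
    lcov $LCOV_OPTS -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk \
        -t spdk-gp-11 -o "$OUT/cov_test.info"
    # Merge with the baseline capture taken before the tests.
    lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    # Strip sources that should not count toward SPDK coverage.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
    done
    # Drop the intermediate captures once the filtered total exists.
    rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"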
01:02:14.370 05:57:08 -- spdk/autorun.sh@1 -- $ timing_finish
01:02:14.371 05:57:08 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
01:02:14.371 05:57:08 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
01:02:14.371 05:57:08 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
01:02:14.371 05:57:08 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
01:02:14.371 + [[ -n 450268 ]]
01:02:14.371 + sudo kill 450268
01:02:14.379 [Pipeline] }
01:02:14.395 [Pipeline] // stage
01:02:14.400 [Pipeline] }
01:02:14.415 [Pipeline] // timeout
01:02:14.421 [Pipeline] }
01:02:14.435 [Pipeline] // catchError
01:02:14.440 [Pipeline] }
01:02:14.455 [Pipeline] // wrap
01:02:14.461 [Pipeline] }
01:02:14.474 [Pipeline] // catchError
01:02:14.483 [Pipeline] stage
01:02:14.485 [Pipeline] { (Epilogue)
01:02:14.497 [Pipeline] catchError
01:02:14.499 [Pipeline] {
01:02:14.512 [Pipeline] echo
01:02:14.513 Cleanup processes
01:02:14.518 [Pipeline] sh
01:02:14.798 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
01:02:14.798 855426 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
01:02:14.810 [Pipeline] sh
01:02:15.087 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
01:02:15.087 ++ grep -v 'sudo pgrep'
01:02:15.087 ++ awk '{print $1}'
01:02:15.087 + sudo kill -9
01:02:15.087 + true
01:02:15.097 [Pipeline] sh
01:02:15.468 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
01:02:25.510 [Pipeline] sh
01:02:25.796 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
01:02:25.796 Artifacts sizes are good
01:02:25.810 [Pipeline] archiveArtifacts
01:02:25.817 Archiving artifacts
01:02:25.962 [Pipeline] sh
01:02:26.247 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
01:02:26.262 [Pipeline] cleanWs
01:02:26.271 [WS-CLEANUP] Deleting project workspace...
01:02:26.271 [WS-CLEANUP] Deferred wipeout is used...
01:02:26.278 [WS-CLEANUP] done
01:02:26.280 [Pipeline] }
01:02:26.296 [Pipeline] // catchError
01:02:26.303 [Pipeline] sh
01:02:26.576 + logger -p user.info -t JENKINS-CI
01:02:26.584 [Pipeline] }
01:02:26.595 [Pipeline] // stage
01:02:26.600 [Pipeline] }
01:02:26.611 [Pipeline] // node
01:02:26.615 [Pipeline] End of Pipeline
01:02:26.647 Finished: SUCCESS
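For reference, the "Cleanup processes" step in the epilogue above boils down to the following shell pattern; the pipeline inlines the command substitution instead, and the xargs -r here is an assumption added only to keep the sketch runnable when no processes match:

    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    # List anything still running out of the workspace, drop the pgrep invocation itself,
    # and kill what is left; '|| true' keeps the step from failing when nothing matches.
    sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}' \
        | xargs -r sudo kill -9 || true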